Nov 23 15:00:52 np0005532761 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 23 15:00:52 np0005532761 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 23 15:00:52 np0005532761 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:52 np0005532761 kernel: BIOS-provided physical RAM map:
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 23 15:00:52 np0005532761 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
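
The e820 map above is the firmware's view of physical memory; summing the "usable" ranges should come out near the "Memory: ... 8388068K" total reported later in this boot (three usable ranges, roughly 8 GiB). A minimal cross-check in Python, assuming this log is saved as boot.log (hypothetical name):

    import re

    # Matches e820 lines like:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    usable = 0
    with open("boot.log") as f:          # hypothetical path to this log
        for line in f:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                usable += end - start + 1    # ranges are inclusive

    print(f"usable RAM: {usable / 2**20:.1f} MiB")   # ~8192 MiB here
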
Nov 23 15:00:52 np0005532761 kernel: NX (Execute Disable) protection: active
Nov 23 15:00:52 np0005532761 kernel: APIC: Static calls initialized
Nov 23 15:00:52 np0005532761 kernel: SMBIOS 2.8 present.
Nov 23 15:00:52 np0005532761 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 23 15:00:52 np0005532761 kernel: Hypervisor detected: KVM
Nov 23 15:00:52 np0005532761 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 23 15:00:52 np0005532761 kernel: kvm-clock: using sched offset of 9952047345 cycles
Nov 23 15:00:52 np0005532761 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 15:00:52 np0005532761 kernel: tsc: Detected 2799.998 MHz processor
Nov 23 15:00:52 np0005532761 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 23 15:00:52 np0005532761 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 23 15:00:52 np0005532761 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 23 15:00:52 np0005532761 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 23 15:00:52 np0005532761 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 23 15:00:52 np0005532761 kernel: Using GB pages for direct mapping
Nov 23 15:00:52 np0005532761 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 23 15:00:52 np0005532761 kernel: ACPI: Early table checksum verification disabled
Nov 23 15:00:52 np0005532761 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 23 15:00:52 np0005532761 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:52 np0005532761 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:52 np0005532761 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:52 np0005532761 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 23 15:00:52 np0005532761 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:52 np0005532761 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 23 15:00:52 np0005532761 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 23 15:00:52 np0005532761 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 23 15:00:52 np0005532761 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 23 15:00:52 np0005532761 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 23 15:00:52 np0005532761 kernel: No NUMA configuration found
Nov 23 15:00:52 np0005532761 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 23 15:00:52 np0005532761 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 23 15:00:52 np0005532761 kernel: crashkernel reserved: 0x00000000a5000000 - 0x00000000b5000000 (256 MB)
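
The crashkernel= parameter on the command line above uses range syntax: each "start-end:size" clause applies when installed RAM falls in [start, end). With ~8 GiB in this guest, the 2G-64G:256M clause wins, matching the 256 MB reservation logged here (0xb5000000 - 0xa5000000 = 0x10000000). A minimal sketch of that selection logic in Python (an illustration, not the kernel's implementation):

    UNITS = {"K": 2**10, "M": 2**20, "G": 2**30}

    def parse_size(s: str) -> int:
        # "" means an open-ended upper bound, as in "64G-:512M"
        if not s:
            return 0
        return int(s[:-1]) * UNITS[s[-1]] if s[-1] in UNITS else int(s)

    def crashkernel_for(spec: str, ram: int) -> int:
        for clause in spec.split(","):
            rng, size = clause.split(":")
            lo, _, hi = rng.partition("-")
            if parse_size(lo) <= ram and (not hi or ram < parse_size(hi)):
                return parse_size(size)
        return 0

    spec = "1G-2G:192M,2G-64G:256M,64G-:512M"
    print(crashkernel_for(spec, 8 * 2**30) // 2**20, "MiB")   # 256 MiB
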
Nov 23 15:00:52 np0005532761 kernel: Zone ranges:
Nov 23 15:00:52 np0005532761 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 23 15:00:52 np0005532761 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 23 15:00:52 np0005532761 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 15:00:52 np0005532761 kernel:  Device   empty
Nov 23 15:00:52 np0005532761 kernel: Movable zone start for each node
Nov 23 15:00:52 np0005532761 kernel: Early memory node ranges
Nov 23 15:00:52 np0005532761 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 23 15:00:52 np0005532761 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 23 15:00:52 np0005532761 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 15:00:52 np0005532761 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 23 15:00:52 np0005532761 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 23 15:00:52 np0005532761 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 23 15:00:52 np0005532761 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 23 15:00:52 np0005532761 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 23 15:00:52 np0005532761 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 23 15:00:52 np0005532761 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 23 15:00:52 np0005532761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 23 15:00:52 np0005532761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 23 15:00:52 np0005532761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 23 15:00:52 np0005532761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 23 15:00:52 np0005532761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 23 15:00:52 np0005532761 kernel: TSC deadline timer available
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Max. logical packages:   8
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Max. logical dies:       8
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Max. dies per package:   1
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Max. threads per core:   1
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Num. cores per package:     1
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Num. threads per package:   1
Nov 23 15:00:52 np0005532761 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 23 15:00:52 np0005532761 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 23 15:00:52 np0005532761 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 23 15:00:52 np0005532761 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 23 15:00:52 np0005532761 kernel: Booting paravirtualized kernel on KVM
Nov 23 15:00:52 np0005532761 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 23 15:00:52 np0005532761 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 23 15:00:52 np0005532761 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 23 15:00:52 np0005532761 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 23 15:00:52 np0005532761 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:52 np0005532761 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 23 15:00:52 np0005532761 kernel: random: crng init done
Nov 23 15:00:52 np0005532761 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
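
In these hash-table lines, "order: N" means 2**N contiguous 4 KiB pages, so the sizes are internally consistent: the dentry cache above is order 11 = 2048 pages = 8388608 bytes, i.e. 8 bytes per entry for 1048576 entries. The same arithmetic applies to every "(order: N, ... bytes)" line in this log:

    PAGE = 4096
    order, entries = 11, 1048576       # dentry cache line above
    size = (1 << order) * PAGE
    print(size, size // entries)       # 8388608 8
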
Nov 23 15:00:52 np0005532761 kernel: Fallback order for Node 0: 0 
Nov 23 15:00:52 np0005532761 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 23 15:00:52 np0005532761 kernel: Policy zone: Normal
Nov 23 15:00:52 np0005532761 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 15:00:52 np0005532761 kernel: software IO TLB: area num 8.
Nov 23 15:00:52 np0005532761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 23 15:00:52 np0005532761 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 23 15:00:52 np0005532761 kernel: ftrace: allocated 193 pages with 3 groups
Nov 23 15:00:52 np0005532761 kernel: Dynamic Preempt: voluntary
Nov 23 15:00:52 np0005532761 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 15:00:52 np0005532761 kernel: rcu: 	RCU event tracing is enabled.
Nov 23 15:00:52 np0005532761 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 23 15:00:52 np0005532761 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 23 15:00:52 np0005532761 kernel: 	Rude variant of Tasks RCU enabled.
Nov 23 15:00:52 np0005532761 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 23 15:00:52 np0005532761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 15:00:52 np0005532761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 23 15:00:52 np0005532761 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:52 np0005532761 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:52 np0005532761 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:52 np0005532761 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 23 15:00:52 np0005532761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 15:00:52 np0005532761 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 23 15:00:52 np0005532761 kernel: Console: colour VGA+ 80x25
Nov 23 15:00:52 np0005532761 kernel: printk: console [ttyS0] enabled
Nov 23 15:00:52 np0005532761 kernel: ACPI: Core revision 20230331
Nov 23 15:00:52 np0005532761 kernel: APIC: Switch to symmetric I/O mode setup
Nov 23 15:00:52 np0005532761 kernel: x2apic enabled
Nov 23 15:00:52 np0005532761 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 23 15:00:52 np0005532761 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 23 15:00:52 np0005532761 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 23 15:00:52 np0005532761 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 23 15:00:52 np0005532761 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 23 15:00:52 np0005532761 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 23 15:00:52 np0005532761 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 23 15:00:52 np0005532761 kernel: Spectre V2 : Mitigation: Retpolines
Nov 23 15:00:52 np0005532761 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 23 15:00:52 np0005532761 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 23 15:00:52 np0005532761 kernel: RETBleed: Mitigation: untrained return thunk
Nov 23 15:00:52 np0005532761 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 23 15:00:52 np0005532761 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 23 15:00:52 np0005532761 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 23 15:00:52 np0005532761 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 23 15:00:52 np0005532761 kernel: x86/bugs: return thunk changed
Nov 23 15:00:52 np0005532761 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 23 15:00:52 np0005532761 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 23 15:00:52 np0005532761 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 23 15:00:52 np0005532761 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 23 15:00:52 np0005532761 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 23 15:00:52 np0005532761 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 23 15:00:52 np0005532761 kernel: Freeing SMP alternatives memory: 40K
Nov 23 15:00:52 np0005532761 kernel: pid_max: default: 32768 minimum: 301
Nov 23 15:00:52 np0005532761 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 23 15:00:52 np0005532761 kernel: landlock: Up and running.
Nov 23 15:00:52 np0005532761 kernel: Yama: becoming mindful.
Nov 23 15:00:52 np0005532761 kernel: SELinux:  Initializing.
Nov 23 15:00:52 np0005532761 kernel: LSM support for eBPF active
Nov 23 15:00:52 np0005532761 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 23 15:00:52 np0005532761 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 23 15:00:52 np0005532761 kernel: ... version:                0
Nov 23 15:00:52 np0005532761 kernel: ... bit width:              48
Nov 23 15:00:52 np0005532761 kernel: ... generic registers:      6
Nov 23 15:00:52 np0005532761 kernel: ... value mask:             0000ffffffffffff
Nov 23 15:00:52 np0005532761 kernel: ... max period:             00007fffffffffff
Nov 23 15:00:52 np0005532761 kernel: ... fixed-purpose events:   0
Nov 23 15:00:52 np0005532761 kernel: ... event mask:             000000000000003f
Nov 23 15:00:52 np0005532761 kernel: signal: max sigframe size: 1776
Nov 23 15:00:52 np0005532761 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 15:00:52 np0005532761 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 23 15:00:52 np0005532761 kernel: smp: Bringing up secondary CPUs ...
Nov 23 15:00:52 np0005532761 kernel: smpboot: x86: Booting SMP configuration:
Nov 23 15:00:52 np0005532761 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 23 15:00:52 np0005532761 kernel: smp: Brought up 1 node, 8 CPUs
Nov 23 15:00:52 np0005532761 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 23 15:00:52 np0005532761 kernel: node 0 deferred pages initialised in 10ms
Nov 23 15:00:52 np0005532761 kernel: Memory: 7765936K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616276K reserved, 0K cma-reserved)
Nov 23 15:00:52 np0005532761 kernel: devtmpfs: initialized
Nov 23 15:00:52 np0005532761 kernel: x86/mm: Memory block size: 128MB
Nov 23 15:00:52 np0005532761 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 15:00:52 np0005532761 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 15:00:52 np0005532761 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 23 15:00:52 np0005532761 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 15:00:52 np0005532761 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 15:00:52 np0005532761 kernel: audit: initializing netlink subsys (disabled)
Nov 23 15:00:52 np0005532761 kernel: audit: type=2000 audit(1763928051.108:1): state=initialized audit_enabled=0 res=1
Nov 23 15:00:52 np0005532761 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 23 15:00:52 np0005532761 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 15:00:52 np0005532761 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 23 15:00:52 np0005532761 kernel: cpuidle: using governor menu
Nov 23 15:00:52 np0005532761 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 15:00:52 np0005532761 kernel: PCI: Using configuration type 1 for base access
Nov 23 15:00:52 np0005532761 kernel: PCI: Using configuration type 1 for extended access
Nov 23 15:00:52 np0005532761 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 23 15:00:52 np0005532761 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 15:00:52 np0005532761 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 15:00:52 np0005532761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 15:00:52 np0005532761 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 15:00:52 np0005532761 kernel: Demotion targets for Node 0: null
Nov 23 15:00:52 np0005532761 kernel: cryptd: max_cpu_qlen set to 1000
Nov 23 15:00:52 np0005532761 kernel: ACPI: Added _OSI(Module Device)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 15:00:52 np0005532761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 15:00:52 np0005532761 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 23 15:00:52 np0005532761 kernel: ACPI: Interpreter enabled
Nov 23 15:00:52 np0005532761 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 23 15:00:52 np0005532761 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 23 15:00:52 np0005532761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 23 15:00:52 np0005532761 kernel: PCI: Using E820 reservations for host bridge windows
Nov 23 15:00:52 np0005532761 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 15:00:52 np0005532761 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [3] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [4] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [5] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [6] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [7] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [8] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [9] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [10] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [11] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [12] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [13] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [14] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [15] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [16] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [17] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [18] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [19] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [20] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [21] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [22] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [23] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [24] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [25] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [26] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [27] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [28] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [29] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [30] registered
Nov 23 15:00:52 np0005532761 kernel: acpiphp: Slot [31] registered
Nov 23 15:00:52 np0005532761 kernel: PCI host bridge to bus 0000:00
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 23 15:00:52 np0005532761 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 23 15:00:52 np0005532761 kernel: iommu: Default domain type: Translated
Nov 23 15:00:52 np0005532761 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 23 15:00:52 np0005532761 kernel: SCSI subsystem initialized
Nov 23 15:00:52 np0005532761 kernel: ACPI: bus type USB registered
Nov 23 15:00:52 np0005532761 kernel: usbcore: registered new interface driver usbfs
Nov 23 15:00:52 np0005532761 kernel: usbcore: registered new interface driver hub
Nov 23 15:00:52 np0005532761 kernel: usbcore: registered new device driver usb
Nov 23 15:00:52 np0005532761 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 15:00:52 np0005532761 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 23 15:00:52 np0005532761 kernel: PTP clock support registered
Nov 23 15:00:52 np0005532761 kernel: EDAC MC: Ver: 3.0.0
Nov 23 15:00:52 np0005532761 kernel: NetLabel: Initializing
Nov 23 15:00:52 np0005532761 kernel: NetLabel:  domain hash size = 128
Nov 23 15:00:52 np0005532761 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 23 15:00:52 np0005532761 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 23 15:00:52 np0005532761 kernel: PCI: Using ACPI for IRQ routing
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 23 15:00:52 np0005532761 kernel: vgaarb: loaded
Nov 23 15:00:52 np0005532761 kernel: clocksource: Switched to clocksource kvm-clock
Nov 23 15:00:52 np0005532761 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 15:00:52 np0005532761 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 15:00:52 np0005532761 kernel: pnp: PnP ACPI init
Nov 23 15:00:52 np0005532761 kernel: pnp: PnP ACPI: found 5 devices
Nov 23 15:00:52 np0005532761 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_INET protocol family
Nov 23 15:00:52 np0005532761 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 23 15:00:52 np0005532761 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_XDP protocol family
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 23 15:00:52 np0005532761 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 23 15:00:52 np0005532761 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 23 15:00:52 np0005532761 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77278 usecs
Nov 23 15:00:52 np0005532761 kernel: PCI: CLS 0 bytes, default 64
Nov 23 15:00:52 np0005532761 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 23 15:00:52 np0005532761 kernel: software IO TLB: mapped [mem 0x00000000bbfdb000-0x00000000bffdb000] (64MB)
Nov 23 15:00:52 np0005532761 kernel: ACPI: bus type thunderbolt registered
Nov 23 15:00:52 np0005532761 kernel: Trying to unpack rootfs image as initramfs...
Nov 23 15:00:52 np0005532761 kernel: Initialise system trusted keyrings
Nov 23 15:00:52 np0005532761 kernel: Key type blacklist registered
Nov 23 15:00:52 np0005532761 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 23 15:00:52 np0005532761 kernel: zbud: loaded
Nov 23 15:00:52 np0005532761 kernel: integrity: Platform Keyring initialized
Nov 23 15:00:52 np0005532761 kernel: integrity: Machine keyring initialized
Nov 23 15:00:52 np0005532761 kernel: Freeing initrd memory: 85868K
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_ALG protocol family
Nov 23 15:00:52 np0005532761 kernel: xor: automatically using best checksumming function   avx       
Nov 23 15:00:52 np0005532761 kernel: Key type asymmetric registered
Nov 23 15:00:52 np0005532761 kernel: Asymmetric key parser 'x509' registered
Nov 23 15:00:52 np0005532761 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 23 15:00:52 np0005532761 kernel: io scheduler mq-deadline registered
Nov 23 15:00:52 np0005532761 kernel: io scheduler kyber registered
Nov 23 15:00:52 np0005532761 kernel: io scheduler bfq registered
Nov 23 15:00:52 np0005532761 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 23 15:00:52 np0005532761 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 23 15:00:52 np0005532761 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 23 15:00:52 np0005532761 kernel: ACPI: button: Power Button [PWRF]
Nov 23 15:00:52 np0005532761 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 23 15:00:52 np0005532761 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 23 15:00:52 np0005532761 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 23 15:00:52 np0005532761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 15:00:52 np0005532761 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 23 15:00:52 np0005532761 kernel: Non-volatile memory driver v1.3
Nov 23 15:00:52 np0005532761 kernel: rdac: device handler registered
Nov 23 15:00:52 np0005532761 kernel: hp_sw: device handler registered
Nov 23 15:00:52 np0005532761 kernel: emc: device handler registered
Nov 23 15:00:52 np0005532761 kernel: alua: device handler registered
Nov 23 15:00:52 np0005532761 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 23 15:00:52 np0005532761 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 23 15:00:52 np0005532761 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 23 15:00:52 np0005532761 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 23 15:00:52 np0005532761 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 23 15:00:52 np0005532761 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 23 15:00:52 np0005532761 kernel: usb usb1: Product: UHCI Host Controller
Nov 23 15:00:52 np0005532761 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 23 15:00:52 np0005532761 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 23 15:00:52 np0005532761 kernel: hub 1-0:1.0: USB hub found
Nov 23 15:00:52 np0005532761 kernel: hub 1-0:1.0: 2 ports detected
Nov 23 15:00:52 np0005532761 kernel: usbcore: registered new interface driver usbserial_generic
Nov 23 15:00:52 np0005532761 kernel: usbserial: USB Serial support registered for generic
Nov 23 15:00:52 np0005532761 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 23 15:00:52 np0005532761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 23 15:00:52 np0005532761 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 23 15:00:52 np0005532761 kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 15:00:52 np0005532761 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 23 15:00:52 np0005532761 kernel: rtc_cmos 00:04: registered as rtc0
Nov 23 15:00:52 np0005532761 kernel: rtc_cmos 00:04: setting system clock to 2025-11-23T20:00:51 UTC (1763928051)
Nov 23 15:00:52 np0005532761 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
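
The rtc_cmos line above carries both forms of the same instant: epoch 1763928051 is 2025-11-23T20:00:51 UTC, while the syslog prefixes in this file ("Nov 23 15:00:52") render local time at UTC-5 (assumed zone of the logging host). A quick conversion check:

    from datetime import datetime, timezone, timedelta

    t = datetime.fromtimestamp(1763928051, tz=timezone.utc)
    print(t.isoformat())                       # 2025-11-23T20:00:51+00:00
    local = t.astimezone(timezone(timedelta(hours=-5)))
    print(local.strftime("%b %d %H:%M:%S"))    # Nov 23 15:00:51
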
Nov 23 15:00:52 np0005532761 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 23 15:00:52 np0005532761 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 23 15:00:52 np0005532761 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 23 15:00:52 np0005532761 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 15:00:52 np0005532761 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 23 15:00:52 np0005532761 kernel: usbcore: registered new interface driver usbhid
Nov 23 15:00:52 np0005532761 kernel: usbhid: USB HID core driver
Nov 23 15:00:52 np0005532761 kernel: drop_monitor: Initializing network drop monitor service
Nov 23 15:00:52 np0005532761 kernel: Initializing XFRM netlink socket
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_INET6 protocol family
Nov 23 15:00:52 np0005532761 kernel: Segment Routing with IPv6
Nov 23 15:00:52 np0005532761 kernel: NET: Registered PF_PACKET protocol family
Nov 23 15:00:52 np0005532761 kernel: mpls_gso: MPLS GSO support
Nov 23 15:00:52 np0005532761 kernel: IPI shorthand broadcast: enabled
Nov 23 15:00:52 np0005532761 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 23 15:00:52 np0005532761 kernel: AES CTR mode by8 optimization enabled
Nov 23 15:00:52 np0005532761 kernel: sched_clock: Marking stable (1165013422, 155821111)->(1404412893, -83578360)
Nov 23 15:00:52 np0005532761 kernel: registered taskstats version 1
Nov 23 15:00:52 np0005532761 kernel: Loading compiled-in X.509 certificates
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 23 15:00:52 np0005532761 kernel: Demotion targets for Node 0: null
Nov 23 15:00:52 np0005532761 kernel: page_owner is disabled
Nov 23 15:00:52 np0005532761 kernel: Key type .fscrypt registered
Nov 23 15:00:52 np0005532761 kernel: Key type fscrypt-provisioning registered
Nov 23 15:00:52 np0005532761 kernel: Key type big_key registered
Nov 23 15:00:52 np0005532761 kernel: Key type encrypted registered
Nov 23 15:00:52 np0005532761 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 15:00:52 np0005532761 kernel: Loading compiled-in module X.509 certificates
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 15:00:52 np0005532761 kernel: ima: Allocated hash algorithm: sha256
Nov 23 15:00:52 np0005532761 kernel: ima: No architecture policies found
Nov 23 15:00:52 np0005532761 kernel: evm: Initialising EVM extended attributes:
Nov 23 15:00:52 np0005532761 kernel: evm: security.selinux
Nov 23 15:00:52 np0005532761 kernel: evm: security.SMACK64 (disabled)
Nov 23 15:00:52 np0005532761 kernel: evm: security.SMACK64EXEC (disabled)
Nov 23 15:00:52 np0005532761 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 23 15:00:52 np0005532761 kernel: evm: security.SMACK64MMAP (disabled)
Nov 23 15:00:52 np0005532761 kernel: evm: security.apparmor (disabled)
Nov 23 15:00:52 np0005532761 kernel: evm: security.ima
Nov 23 15:00:52 np0005532761 kernel: evm: security.capability
Nov 23 15:00:52 np0005532761 kernel: evm: HMAC attrs: 0x1
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 23 15:00:52 np0005532761 kernel: Running certificate verification RSA selftest
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 23 15:00:52 np0005532761 kernel: Running certificate verification ECDSA selftest
Nov 23 15:00:52 np0005532761 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 23 15:00:52 np0005532761 kernel: clk: Disabling unused clocks
Nov 23 15:00:52 np0005532761 kernel: Freeing unused decrypted memory: 2028K
Nov 23 15:00:52 np0005532761 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 23 15:00:52 np0005532761 kernel: Write protecting the kernel read-only data: 30720k
Nov 23 15:00:52 np0005532761 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 23 15:00:52 np0005532761 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 23 15:00:52 np0005532761 kernel: Run /init as init process
Nov 23 15:00:52 np0005532761 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 15:00:52 np0005532761 systemd: Detected virtualization kvm.
Nov 23 15:00:52 np0005532761 systemd: Detected architecture x86-64.
Nov 23 15:00:52 np0005532761 systemd: Running in initrd.
Nov 23 15:00:52 np0005532761 systemd: No hostname configured, using default hostname.
Nov 23 15:00:52 np0005532761 systemd: Hostname set to <localhost>.
Nov 23 15:00:52 np0005532761 systemd: Initializing machine ID from VM UUID.
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: Manufacturer: QEMU
Nov 23 15:00:52 np0005532761 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 23 15:00:52 np0005532761 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 23 15:00:52 np0005532761 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 23 15:00:52 np0005532761 systemd: Queued start job for default target Initrd Default Target.
Nov 23 15:00:52 np0005532761 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 15:00:52 np0005532761 systemd: Reached target Local Encrypted Volumes.
Nov 23 15:00:52 np0005532761 systemd: Reached target Initrd /usr File System.
Nov 23 15:00:52 np0005532761 systemd: Reached target Local File Systems.
Nov 23 15:00:52 np0005532761 systemd: Reached target Path Units.
Nov 23 15:00:52 np0005532761 systemd: Reached target Slice Units.
Nov 23 15:00:52 np0005532761 systemd: Reached target Swaps.
Nov 23 15:00:52 np0005532761 systemd: Reached target Timer Units.
Nov 23 15:00:52 np0005532761 systemd: Listening on D-Bus System Message Bus Socket.
Nov 23 15:00:52 np0005532761 systemd: Listening on Journal Socket (/dev/log).
Nov 23 15:00:52 np0005532761 systemd: Listening on Journal Socket.
Nov 23 15:00:52 np0005532761 systemd: Listening on udev Control Socket.
Nov 23 15:00:52 np0005532761 systemd: Listening on udev Kernel Socket.
Nov 23 15:00:52 np0005532761 systemd: Reached target Socket Units.
Nov 23 15:00:52 np0005532761 systemd: Starting Create List of Static Device Nodes...
Nov 23 15:00:52 np0005532761 systemd: Starting Journal Service...
Nov 23 15:00:52 np0005532761 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 15:00:52 np0005532761 systemd: Starting Apply Kernel Variables...
Nov 23 15:00:52 np0005532761 systemd: Starting Create System Users...
Nov 23 15:00:52 np0005532761 systemd: Starting Setup Virtual Console...
Nov 23 15:00:52 np0005532761 systemd: Finished Create List of Static Device Nodes.
Nov 23 15:00:52 np0005532761 systemd: Finished Apply Kernel Variables.
Nov 23 15:00:52 np0005532761 systemd: Finished Create System Users.
Nov 23 15:00:52 np0005532761 systemd-journald[306]: Journal started
Nov 23 15:00:52 np0005532761 systemd-journald[306]: Runtime Journal (/run/log/journal/96c43856d9f24184a050b9dc5065d3a6) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:00:52 np0005532761 systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 23 15:00:52 np0005532761 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 23 15:00:52 np0005532761 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 23 15:00:52 np0005532761 systemd: Started Journal Service.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 15:00:52 np0005532761 systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 15:00:52 np0005532761 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 15:00:52 np0005532761 systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 15:00:52 np0005532761 systemd[1]: Finished Setup Virtual Console.
Nov 23 15:00:52 np0005532761 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting dracut cmdline hook...
Nov 23 15:00:52 np0005532761 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 23 15:00:52 np0005532761 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:52 np0005532761 systemd[1]: Finished dracut cmdline hook.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting dracut pre-udev hook...
Nov 23 15:00:52 np0005532761 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 15:00:52 np0005532761 kernel: device-mapper: uevent: version 1.0.3
Nov 23 15:00:52 np0005532761 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 23 15:00:52 np0005532761 kernel: RPC: Registered named UNIX socket transport module.
Nov 23 15:00:52 np0005532761 kernel: RPC: Registered udp transport module.
Nov 23 15:00:52 np0005532761 kernel: RPC: Registered tcp transport module.
Nov 23 15:00:52 np0005532761 kernel: RPC: Registered tcp-with-tls transport module.
Nov 23 15:00:52 np0005532761 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 23 15:00:52 np0005532761 rpc.statd[443]: Version 2.5.4 starting
Nov 23 15:00:52 np0005532761 rpc.statd[443]: Initializing NSM state
Nov 23 15:00:52 np0005532761 rpc.idmapd[448]: Setting log level to 0
Nov 23 15:00:52 np0005532761 systemd[1]: Finished dracut pre-udev hook.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 15:00:52 np0005532761 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 15:00:52 np0005532761 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting dracut pre-trigger hook...
Nov 23 15:00:52 np0005532761 systemd[1]: Finished dracut pre-trigger hook.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting Coldplug All udev Devices...
Nov 23 15:00:52 np0005532761 systemd[1]: Created slice Slice /system/modprobe.
Nov 23 15:00:52 np0005532761 systemd[1]: Starting Load Kernel Module configfs...
Nov 23 15:00:52 np0005532761 systemd[1]: Finished Coldplug All udev Devices.
Nov 23 15:00:52 np0005532761 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:00:52 np0005532761 systemd[1]: Finished Load Kernel Module configfs.
Nov 23 15:00:52 np0005532761 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 15:00:52 np0005532761 systemd[1]: Reached target Network.
Nov 23 15:00:52 np0005532761 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 15:00:52 np0005532761 systemd[1]: Starting dracut initqueue hook...
Nov 23 15:00:52 np0005532761 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 23 15:00:52 np0005532761 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 23 15:00:52 np0005532761 kernel: vda: vda1
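
The virtio_blk line reports both decimal and binary units for the same disk: 167772160 blocks of 512 bytes is 85,899,345,920 bytes, i.e. 85.9 GB (decimal) and exactly 80.0 GiB (binary):

    blocks, bsize = 167772160, 512     # from the virtio_blk line above
    size = blocks * bsize
    print(size / 10**9, "GB |", size / 2**30, "GiB")   # 85.89934592 GB | 80.0 GiB
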
Nov 23 15:00:53 np0005532761 kernel: scsi host0: ata_piix
Nov 23 15:00:53 np0005532761 kernel: scsi host1: ata_piix
Nov 23 15:00:53 np0005532761 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 23 15:00:53 np0005532761 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 23 15:00:53 np0005532761 systemd-udevd[498]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:00:53 np0005532761 systemd[1]: Mounting Kernel Configuration File System...
Nov 23 15:00:53 np0005532761 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 15:00:53 np0005532761 systemd[1]: Mounted Kernel Configuration File System.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target Initrd Root Device.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target System Initialization.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target Basic System.
Nov 23 15:00:53 np0005532761 kernel: ata1: found unknown device (class 0)
Nov 23 15:00:53 np0005532761 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 23 15:00:53 np0005532761 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 23 15:00:53 np0005532761 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 23 15:00:53 np0005532761 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 23 15:00:53 np0005532761 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 15:00:53 np0005532761 systemd[1]: Finished dracut initqueue hook.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 23 15:00:53 np0005532761 systemd[1]: Reached target Remote File Systems.
Nov 23 15:00:53 np0005532761 systemd[1]: Starting dracut pre-mount hook...
Nov 23 15:00:53 np0005532761 systemd[1]: Finished dracut pre-mount hook.
Nov 23 15:00:53 np0005532761 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 23 15:00:53 np0005532761 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Nov 23 15:00:53 np0005532761 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 15:00:53 np0005532761 systemd[1]: Mounting /sysroot...
Nov 23 15:00:53 np0005532761 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 23 15:00:53 np0005532761 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 23 15:02:23 np0005532761 systemd[1]: sysroot.mount: Mounting timed out. Terminating.
Nov 23 15:02:35 np0005532761 kernel: XFS (vda1): Ending clean mount
Nov 23 15:02:47 np0005532761 systemd[1]: sysroot.mount: Mount process exited, code=killed, status=15/TERM
Nov 23 15:02:47 np0005532761 systemd[1]: Mounted /sysroot.
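
The sequence above is the one anomaly in this boot: sysroot.mount starts at 15:00:53, systemd declares a timeout at 15:02:23 (exactly 90 s, the usual DefaultTimeoutStartSec on this release) and SIGTERMs the mount helper, yet the kernel finishes the XFS mount at 15:02:35 and systemd still records /sysroot as mounted at 15:02:47. Stalls like this stand out immediately with a timestamp-gap scan over the log; a minimal sketch in Python, assuming the classic "Mon DD HH:MM:SS" prefix and a hypothetical file name:

    from datetime import datetime

    THRESHOLD = 30  # seconds; the 90 s sysroot stall trips this easily

    prev = None
    with open("boot.log") as f:        # hypothetical path to this log
        for line in f:
            try:
                ts = datetime.strptime(line[:15], "%b %d %H:%M:%S")
            except ValueError:
                continue
            if prev and (ts - prev).total_seconds() > THRESHOLD:
                gap = (ts - prev).total_seconds()
                print(f"{gap:.0f}s gap before: {line.rstrip()}")
            prev = ts
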
Nov 23 15:02:47 np0005532761 systemd[1]: Reached target Initrd Root File System.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 23 15:02:47 np0005532761 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 23 15:02:47 np0005532761 systemd[1]: Reached target Initrd File Systems.
Nov 23 15:02:47 np0005532761 systemd[1]: Reached target Initrd Default Target.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting dracut mount hook...
Nov 23 15:02:47 np0005532761 systemd[1]: Finished dracut mount hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 23 15:02:47 np0005532761 rpc.idmapd[448]: exiting on signal 15
Nov 23 15:02:47 np0005532761 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Network.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Timer Units.
Nov 23 15:02:47 np0005532761 systemd[1]: dbus.socket: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Initrd Default Target.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Basic System.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Initrd Root Device.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Initrd /usr File System.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Path Units.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Remote File Systems.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Slice Units.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Socket Units.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target System Initialization.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Local File Systems.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Swaps.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut mount hook.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut pre-mount hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut initqueue hook.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Apply Kernel Variables.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Coldplug All udev Devices.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut pre-trigger hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Setup Virtual Console.
Nov 23 15:02:47 np0005532761 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Closed udev Control Socket.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Closed udev Kernel Socket.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut pre-udev hook.
Nov 23 15:02:47 np0005532761 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped dracut cmdline hook.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting Cleanup udev Database...
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 23 15:02:47 np0005532761 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 23 15:02:47 np0005532761 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Stopped Create System Users.
Nov 23 15:02:47 np0005532761 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 23 15:02:47 np0005532761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 15:02:47 np0005532761 systemd[1]: Finished Cleanup udev Database.
Nov 23 15:02:47 np0005532761 systemd[1]: Reached target Switch Root.
Nov 23 15:02:47 np0005532761 systemd[1]: Starting Switch Root...
Nov 23 15:02:47 np0005532761 systemd[1]: Switching root.
Nov 23 15:02:47 np0005532761 systemd-journald[306]: Journal stopped
Nov 23 15:02:49 np0005532761 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 23 15:02:49 np0005532761 kernel: audit: type=1404 audit(1763928168.121:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:02:49 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:02:49 np0005532761 kernel: audit: type=1403 audit(1763928168.313:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 15:02:49 np0005532761 systemd: Successfully loaded SELinux policy in 196.465ms.
Nov 23 15:02:49 np0005532761 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.271ms.
Nov 23 15:02:49 np0005532761 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 15:02:49 np0005532761 systemd: Detected virtualization kvm.
Nov 23 15:02:49 np0005532761 systemd: Detected architecture x86-64.
Nov 23 15:02:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:02:49 np0005532761 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Stopped Switch Root.
Nov 23 15:02:49 np0005532761 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 15:02:49 np0005532761 systemd: Created slice Slice /system/getty.
Nov 23 15:02:49 np0005532761 systemd: Created slice Slice /system/serial-getty.
Nov 23 15:02:49 np0005532761 systemd: Created slice Slice /system/sshd-keygen.
Nov 23 15:02:49 np0005532761 systemd: Created slice User and Session Slice.
Nov 23 15:02:49 np0005532761 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 15:02:49 np0005532761 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 23 15:02:49 np0005532761 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 23 15:02:49 np0005532761 systemd: Reached target Local Encrypted Volumes.
Nov 23 15:02:49 np0005532761 systemd: Stopped target Switch Root.
Nov 23 15:02:49 np0005532761 systemd: Stopped target Initrd File Systems.
Nov 23 15:02:49 np0005532761 systemd: Stopped target Initrd Root File System.
Nov 23 15:02:49 np0005532761 systemd: Reached target Local Integrity Protected Volumes.
Nov 23 15:02:49 np0005532761 systemd: Reached target Path Units.
Nov 23 15:02:49 np0005532761 systemd: Reached target rpc_pipefs.target.
Nov 23 15:02:49 np0005532761 systemd: Reached target Slice Units.
Nov 23 15:02:49 np0005532761 systemd: Reached target Swaps.
Nov 23 15:02:49 np0005532761 systemd: Reached target Local Verity Protected Volumes.
Nov 23 15:02:49 np0005532761 systemd: Listening on RPCbind Server Activation Socket.
Nov 23 15:02:49 np0005532761 systemd: Reached target RPC Port Mapper.
Nov 23 15:02:49 np0005532761 systemd: Listening on Process Core Dump Socket.
Nov 23 15:02:49 np0005532761 systemd: Listening on initctl Compatibility Named Pipe.
Nov 23 15:02:49 np0005532761 systemd: Listening on udev Control Socket.
Nov 23 15:02:49 np0005532761 systemd: Listening on udev Kernel Socket.
Nov 23 15:02:49 np0005532761 systemd: Mounting Huge Pages File System...
Nov 23 15:02:49 np0005532761 systemd: Mounting POSIX Message Queue File System...
Nov 23 15:02:49 np0005532761 systemd: Mounting Kernel Debug File System...
Nov 23 15:02:49 np0005532761 systemd: Mounting Kernel Trace File System...
Nov 23 15:02:49 np0005532761 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 15:02:49 np0005532761 systemd: Starting Create List of Static Device Nodes...
Nov 23 15:02:49 np0005532761 systemd: Starting Load Kernel Module configfs...
Nov 23 15:02:49 np0005532761 systemd: Starting Load Kernel Module drm...
Nov 23 15:02:49 np0005532761 systemd: Starting Load Kernel Module efi_pstore...
Nov 23 15:02:49 np0005532761 systemd: Starting Load Kernel Module fuse...
Nov 23 15:02:49 np0005532761 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 23 15:02:49 np0005532761 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Stopped File System Check on Root Device.
Nov 23 15:02:49 np0005532761 systemd: Stopped Journal Service.
Nov 23 15:02:49 np0005532761 kernel: fuse: init (API version 7.37)
Nov 23 15:02:49 np0005532761 systemd: Starting Journal Service...
Nov 23 15:02:49 np0005532761 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532761 systemd: Starting Generate network units from Kernel command line...
Nov 23 15:02:49 np0005532761 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:49 np0005532761 systemd: Starting Remount Root and Kernel File Systems...
Nov 23 15:02:49 np0005532761 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532761 systemd: Starting Apply Kernel Variables...
Nov 23 15:02:49 np0005532761 systemd: Starting Coldplug All udev Devices...
Nov 23 15:02:49 np0005532761 systemd: Mounted Huge Pages File System.
Nov 23 15:02:49 np0005532761 systemd: Mounted POSIX Message Queue File System.
Nov 23 15:02:49 np0005532761 systemd: Mounted Kernel Debug File System.
Nov 23 15:02:49 np0005532761 systemd: Mounted Kernel Trace File System.
Nov 23 15:02:49 np0005532761 systemd: Finished Create List of Static Device Nodes.
Nov 23 15:02:49 np0005532761 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Finished Load Kernel Module configfs.
Nov 23 15:02:49 np0005532761 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Finished Load Kernel Module efi_pstore.
Nov 23 15:02:49 np0005532761 systemd: modprobe@fuse.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Finished Load Kernel Module fuse.
Nov 23 15:02:49 np0005532761 kernel: ACPI: bus type drm_connector registered
Nov 23 15:02:49 np0005532761 systemd: modprobe@drm.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Finished Load Kernel Module drm.
Nov 23 15:02:49 np0005532761 systemd: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 23 15:02:49 np0005532761 systemd: Finished Generate network units from Kernel command line.
Nov 23 15:02:49 np0005532761 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 23 15:02:49 np0005532761 systemd: Mounting FUSE Control File System...
Nov 23 15:02:49 np0005532761 systemd-journald[680]: Journal started
Nov 23 15:02:49 np0005532761 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:02:49 np0005532761 systemd[1]: Queued start job for default target Multi-User System.
Nov 23 15:02:49 np0005532761 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 15:02:49 np0005532761 systemd: Started Journal Service.
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 23 15:02:49 np0005532761 systemd[1]: Mounted FUSE Control File System.
Nov 23 15:02:49 np0005532761 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 15:02:49 np0005532761 systemd[1]: Starting Rebuild Hardware Database...
Nov 23 15:02:49 np0005532761 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 23 15:02:49 np0005532761 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 15:02:49 np0005532761 systemd[1]: Starting Load/Save OS Random Seed...
Nov 23 15:02:49 np0005532761 systemd[1]: Starting Create System Users...
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Apply Kernel Variables.
Nov 23 15:02:49 np0005532761 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:02:49 np0005532761 systemd-journald[680]: Received client request to flush runtime journal.
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Load/Save OS Random Seed.
Nov 23 15:02:49 np0005532761 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Create System Users.
Nov 23 15:02:49 np0005532761 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 15:02:49 np0005532761 systemd[1]: Finished Coldplug All udev Devices.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 15:02:50 np0005532761 systemd[1]: Reached target Preparation for Local File Systems.
Nov 23 15:02:50 np0005532761 systemd[1]: Reached target Local File Systems.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 23 15:02:50 np0005532761 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 23 15:02:50 np0005532761 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 15:02:50 np0005532761 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Automatic Boot Loader Update...
Nov 23 15:02:50 np0005532761 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 15:02:50 np0005532761 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Automatic Boot Loader Update.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Security Auditing Service...
Nov 23 15:02:50 np0005532761 systemd[1]: Starting RPC Bind...
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Rebuild Journal Catalog...
Nov 23 15:02:50 np0005532761 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 23 15:02:50 np0005532761 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Rebuild Journal Catalog.
Nov 23 15:02:50 np0005532761 systemd[1]: Started RPC Bind.
Nov 23 15:02:50 np0005532761 augenrules[707]: /sbin/augenrules: No change
Nov 23 15:02:50 np0005532761 augenrules[722]: No rules
Nov 23 15:02:50 np0005532761 augenrules[722]: enabled 1
Nov 23 15:02:50 np0005532761 augenrules[722]: failure 1
Nov 23 15:02:50 np0005532761 augenrules[722]: pid 702
Nov 23 15:02:50 np0005532761 augenrules[722]: rate_limit 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_limit 8192
Nov 23 15:02:50 np0005532761 augenrules[722]: lost 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog 2
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532761 augenrules[722]: enabled 1
Nov 23 15:02:50 np0005532761 augenrules[722]: failure 1
Nov 23 15:02:50 np0005532761 augenrules[722]: pid 702
Nov 23 15:02:50 np0005532761 augenrules[722]: rate_limit 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_limit 8192
Nov 23 15:02:50 np0005532761 augenrules[722]: lost 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog 2
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532761 augenrules[722]: enabled 1
Nov 23 15:02:50 np0005532761 augenrules[722]: failure 1
Nov 23 15:02:50 np0005532761 augenrules[722]: pid 702
Nov 23 15:02:50 np0005532761 augenrules[722]: rate_limit 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_limit 8192
Nov 23 15:02:50 np0005532761 augenrules[722]: lost 0
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog 3
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532761 augenrules[722]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532761 systemd[1]: Started Security Auditing Service.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Rebuild Hardware Database.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 15:02:50 np0005532761 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 15:02:50 np0005532761 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Load Kernel Module configfs...
Nov 23 15:02:50 np0005532761 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Load Kernel Module configfs.
Nov 23 15:02:50 np0005532761 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 23 15:02:50 np0005532761 systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 23 15:02:50 np0005532761 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 23 15:02:50 np0005532761 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 23 15:02:50 np0005532761 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 23 15:02:50 np0005532761 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 23 15:02:50 np0005532761 systemd[1]: Starting Update is Completed...
Nov 23 15:02:50 np0005532761 kernel: kvm_amd: TSC scaling supported
Nov 23 15:02:50 np0005532761 kernel: kvm_amd: Nested Virtualization enabled
Nov 23 15:02:50 np0005532761 kernel: kvm_amd: Nested Paging enabled
Nov 23 15:02:50 np0005532761 kernel: kvm_amd: LBR virtualization supported
Nov 23 15:02:50 np0005532761 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 23 15:02:50 np0005532761 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 23 15:02:50 np0005532761 kernel: Console: switching to colour dummy device 80x25
Nov 23 15:02:50 np0005532761 systemd[1]: Finished Update is Completed.
Nov 23 15:02:50 np0005532761 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 23 15:02:50 np0005532761 kernel: [drm] features: -context_init
Nov 23 15:02:50 np0005532761 kernel: [drm] number of scanouts: 1
Nov 23 15:02:50 np0005532761 kernel: [drm] number of cap sets: 0
Nov 23 15:02:50 np0005532761 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 23 15:02:50 np0005532761 systemd[1]: Reached target System Initialization.
Nov 23 15:02:50 np0005532761 systemd[1]: Started dnf makecache --timer.
Nov 23 15:02:50 np0005532761 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 23 15:02:50 np0005532761 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 15:02:50 np0005532761 systemd[1]: Started Daily rotation of log files.
Nov 23 15:02:50 np0005532761 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 23 15:02:50 np0005532761 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 23 15:02:50 np0005532761 systemd[1]: Reached target Timer Units.
Nov 23 15:02:50 np0005532761 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 15:02:50 np0005532761 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 23 15:02:50 np0005532761 systemd[1]: Reached target Socket Units.
Nov 23 15:02:51 np0005532761 systemd[1]: Starting D-Bus System Message Bus...
Nov 23 15:02:51 np0005532761 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:51 np0005532761 systemd[1]: Started D-Bus System Message Bus.
Nov 23 15:02:51 np0005532761 dbus-broker-lau[787]: Ready
Nov 23 15:02:51 np0005532761 systemd[1]: Reached target Basic System.
Nov 23 15:02:51 np0005532761 systemd[1]: Starting NTP client/server...
Nov 23 15:02:51 np0005532761 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 23 15:02:51 np0005532761 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 23 15:02:51 np0005532761 systemd[1]: Starting IPv4 firewall with iptables...
Nov 23 15:02:51 np0005532761 systemd[1]: Started irqbalance daemon.
Nov 23 15:02:51 np0005532761 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 23 15:02:51 np0005532761 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532761 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532761 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532761 systemd[1]: Reached target sshd-keygen.target.
Nov 23 15:02:51 np0005532761 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 23 15:02:51 np0005532761 systemd[1]: Reached target User and Group Name Lookups.
Nov 23 15:02:51 np0005532761 systemd[1]: Starting User Login Management...
Nov 23 15:02:51 np0005532761 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 23 15:02:51 np0005532761 chronyd[829]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 15:02:51 np0005532761 chronyd[829]: Loaded 0 symmetric keys
Nov 23 15:02:51 np0005532761 chronyd[829]: Using right/UTC timezone to obtain leap second data
Nov 23 15:02:51 np0005532761 chronyd[829]: Loaded seccomp filter (level 2)
Nov 23 15:02:51 np0005532761 systemd[1]: Started NTP client/server.
Nov 23 15:02:51 np0005532761 systemd-logind[820]: New seat seat0.
Nov 23 15:02:51 np0005532761 systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 15:02:51 np0005532761 systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 15:02:51 np0005532761 systemd[1]: Started User Login Management.
Nov 23 15:02:51 np0005532761 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 23 15:02:51 np0005532761 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 23 15:02:51 np0005532761 iptables.init[815]: iptables: Applying firewall rules: [  OK  ]
Nov 23 15:02:51 np0005532761 systemd[1]: Finished IPv4 firewall with iptables.
Nov 23 15:02:51 np0005532761 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 23 Nov 2025 20:02:51 +0000. Up 121.44 seconds.
Nov 23 15:02:52 np0005532761 systemd[1]: run-cloud\x2dinit-tmp-tmpdy1qrpzg.mount: Deactivated successfully.
Nov 23 15:02:52 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 15:02:52 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 15:02:52 np0005532761 systemd-hostnamed[853]: Hostname set to <np0005532761.novalocal> (static)
Nov 23 15:02:52 np0005532761 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 23 15:02:52 np0005532761 systemd[1]: Reached target Preparation for Network.
Nov 23 15:02:52 np0005532761 systemd[1]: Starting Network Manager...
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3430] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0e13931c-c8ad-4220-a705-acddc9fc6540)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3434] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3591] manager[0x555e33900080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3639] hostname: hostname: using hostnamed
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3640] hostname: static hostname changed from (none) to "np0005532761.novalocal"
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3644] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3763] manager[0x555e33900080]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3764] manager[0x555e33900080]: rfkill: WWAN hardware radio set enabled
Nov 23 15:02:52 np0005532761 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3854] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3854] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3854] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3855] manager: Networking is enabled by state file
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3856] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3932] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3956] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3996] dhcp: init: Using DHCP client 'internal'
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.3999] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4013] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4028] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4037] device (lo): Activation: starting connection 'lo' (1662c742-4425-4e5e-b9bb-1cb60d31d330)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4046] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4049] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4078] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4084] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4086] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4088] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4090] device (eth0): carrier: link connected
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4095] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4102] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4108] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4114] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4115] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4118] manager: NetworkManager state is now CONNECTING
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4139] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 systemd[1]: Started Network Manager.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4160] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4164] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:02:52 np0005532761 systemd[1]: Reached target Network.
Nov 23 15:02:52 np0005532761 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4235] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Nov 23 15:02:52 np0005532761 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4259] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:02:52 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4310] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4318] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4319] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4326] device (lo): Activation: successful, device activated.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4338] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4340] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4343] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4347] device (eth0): Activation: successful, device activated.
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4353] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:02:52 np0005532761 NetworkManager[857]: <info>  [1763928172.4356] manager: startup complete
Nov 23 15:02:52 np0005532761 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 23 15:02:52 np0005532761 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 15:02:52 np0005532761 systemd[1]: Reached target NFS client services.
Nov 23 15:02:52 np0005532761 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 15:02:52 np0005532761 systemd[1]: Reached target Remote File Systems.
Nov 23 15:02:52 np0005532761 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:52 np0005532761 systemd[1]: Finished Network Manager Wait Online.
Nov 23 15:02:52 np0005532761 systemd[1]: Starting Cloud-init: Network Stage...
Nov 23 15:02:52 np0005532761 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 23 Nov 2025 20:02:52 +0000. Up 122.36 seconds.
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.5          | 255.255.255.0 | global | fa:16:3e:74:f8:30 |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe74:f830/64 |       .       |  link  | fa:16:3e:74:f8:30 |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 23 15:02:52 np0005532761 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 23 15:02:53 np0005532761 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:54 np0005532761 cloud-init[921]: Generating public/private rsa key pair.
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key fingerprint is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: SHA256:86T0RvTTEqgZj5qGESnaAcUyJfIGssIAtzf1R4yS2fY root@np0005532761.novalocal
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key's randomart image is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: +---[RSA 3072]----+
Nov 23 15:02:54 np0005532761 cloud-init[921]: |X++   .+ o.      |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |BB.. o+.+...     |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |o+* =  oo.+ .    |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |.+ + o   OE. o   |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |. . .   S + + .  |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     o + B   o   |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |    . + . +      |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     .   .       |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |                 |
Nov 23 15:02:54 np0005532761 cloud-init[921]: +----[SHA256]-----+
Nov 23 15:02:54 np0005532761 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key fingerprint is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: SHA256:nVK2/vo3wWVIkjUaWrwzF9MojpIfEgeI3wy60xg/7WA root@np0005532761.novalocal
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key's randomart image is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: +---[ECDSA 256]---+
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     . ... .oooo |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |    . o . .o=o=..|
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     o + ++o.= + |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |    o . *+oo= o o|
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     * .S++. = o |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |    + E .o.   o  |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |     o +  .    . |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |        .  .  o  |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |          .oo. . |
Nov 23 15:02:54 np0005532761 cloud-init[921]: +----[SHA256]-----+
Nov 23 15:02:54 np0005532761 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 23 15:02:54 np0005532761 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key fingerprint is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: SHA256:2NU0wiB2NDhGQqVUkXDqunEzdJZoyuvIzbtkv0DBgrw root@np0005532761.novalocal
Nov 23 15:02:54 np0005532761 cloud-init[921]: The key's randomart image is:
Nov 23 15:02:54 np0005532761 cloud-init[921]: +--[ED25519 256]--+
Nov 23 15:02:54 np0005532761 cloud-init[921]: |   .==O==o. o    |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |o .. *=o ..+ .   |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |.o oo. .  . .    |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |  o... + .       |
Nov 23 15:02:54 np0005532761 cloud-init[921]: | E .= = S        |
Nov 23 15:02:54 np0005532761 cloud-init[921]: | ..= o           |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |  =++            |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |..==oo           |
Nov 23 15:02:54 np0005532761 cloud-init[921]: |.o+=oo.          |
Nov 23 15:02:54 np0005532761 cloud-init[921]: +----[SHA256]-----+
Nov 23 15:02:54 np0005532761 systemd[1]: Finished Cloud-init: Network Stage.
Nov 23 15:02:54 np0005532761 systemd[1]: Reached target Cloud-config availability.
Nov 23 15:02:54 np0005532761 systemd[1]: Reached target Network is Online.
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Cloud-init: Config Stage...
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Crash recovery kernel arming...
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Notify NFS peers of a restart...
Nov 23 15:02:54 np0005532761 systemd[1]: Starting System Logging Service...
Nov 23 15:02:54 np0005532761 sm-notify[1005]: Version 2.5.4 starting
Nov 23 15:02:54 np0005532761 systemd[1]: Starting OpenSSH server daemon...
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Permit User Sessions...
Nov 23 15:02:54 np0005532761 systemd[1]: Started Notify NFS peers of a restart.
Nov 23 15:02:54 np0005532761 systemd[1]: Started OpenSSH server daemon.
Nov 23 15:02:54 np0005532761 systemd[1]: Finished Permit User Sessions.
Nov 23 15:02:54 np0005532761 systemd[1]: Started Command Scheduler.
Nov 23 15:02:54 np0005532761 systemd[1]: Started Getty on tty1.
Nov 23 15:02:54 np0005532761 systemd[1]: Started Serial Getty on ttyS0.
Nov 23 15:02:54 np0005532761 systemd[1]: Reached target Login Prompts.
Nov 23 15:02:54 np0005532761 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 23 15:02:54 np0005532761 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 23 15:02:54 np0005532761 systemd[1]: Started System Logging Service.
Nov 23 15:02:54 np0005532761 systemd[1]: Reached target Multi-User System.
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 23 15:02:54 np0005532761 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 23 15:02:54 np0005532761 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 23 15:02:54 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:02:54 np0005532761 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Nov 23 15:02:54 np0005532761 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 23 15:02:54 np0005532761 cloud-init[1156]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 23 Nov 2025 20:02:54 +0000. Up 124.37 seconds.
Nov 23 15:02:54 np0005532761 systemd[1]: Finished Cloud-init: Config Stage.
Nov 23 15:02:54 np0005532761 systemd[1]: Starting Cloud-init: Final Stage...
Nov 23 15:02:55 np0005532761 dracut[1285]: dracut-057-102.git20250818.el9
Nov 23 15:02:55 np0005532761 cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 23 Nov 2025 20:02:55 +0000. Up 124.76 seconds.
Nov 23 15:02:55 np0005532761 cloud-init[1307]: #############################################################
Nov 23 15:02:55 np0005532761 cloud-init[1310]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 23 15:02:55 np0005532761 cloud-init[1317]: 256 SHA256:nVK2/vo3wWVIkjUaWrwzF9MojpIfEgeI3wy60xg/7WA root@np0005532761.novalocal (ECDSA)
Nov 23 15:02:55 np0005532761 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 23 15:02:55 np0005532761 cloud-init[1323]: 256 SHA256:2NU0wiB2NDhGQqVUkXDqunEzdJZoyuvIzbtkv0DBgrw root@np0005532761.novalocal (ED25519)
Nov 23 15:02:55 np0005532761 cloud-init[1329]: 3072 SHA256:86T0RvTTEqgZj5qGESnaAcUyJfIGssIAtzf1R4yS2fY root@np0005532761.novalocal (RSA)
Nov 23 15:02:55 np0005532761 cloud-init[1331]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 23 15:02:55 np0005532761 cloud-init[1333]: #############################################################
Nov 23 15:02:55 np0005532761 cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Sun, 23 Nov 2025 20:02:55 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 124.93 seconds
Nov 23 15:02:55 np0005532761 systemd[1]: Finished Cloud-init: Final Stage.
Nov 23 15:02:55 np0005532761 systemd[1]: Reached target Cloud-init target.
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:55 np0005532761 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: memstrack is not available
Nov 23 15:02:56 np0005532761 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 15:02:56 np0005532761 dracut[1287]: memstrack is not available
Nov 23 15:02:56 np0005532761 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 15:02:56 np0005532761 dracut[1287]: *** Including module: systemd ***
Nov 23 15:02:57 np0005532761 dracut[1287]: *** Including module: fips ***
Nov 23 15:02:57 np0005532761 dracut[1287]: *** Including module: systemd-initrd ***
Nov 23 15:02:57 np0005532761 dracut[1287]: *** Including module: i18n ***
Nov 23 15:02:57 np0005532761 chronyd[829]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Nov 23 15:02:58 np0005532761 chronyd[829]: System clock wrong by 1.327464 seconds
Nov 23 15:02:58 np0005532761 chronyd[829]: System clock was stepped by 1.327464 seconds
Nov 23 15:02:58 np0005532761 chronyd[829]: System clock TAI offset set to 37 seconds
Nov 23 15:02:58 np0005532761 dracut[1287]: *** Including module: drm ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: prefixdevname ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: kernel-modules ***
Nov 23 15:02:59 np0005532761 kernel: block vda: the capability attribute has been deprecated.
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: kernel-modules-extra ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: qemu ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: fstab-sys ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: rootfs-block ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: terminfo ***
Nov 23 15:02:59 np0005532761 dracut[1287]: *** Including module: udev-rules ***
Nov 23 15:03:00 np0005532761 chronyd[829]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 23 15:03:00 np0005532761 dracut[1287]: Skipping udev rule: 91-permissions.rules
Nov 23 15:03:00 np0005532761 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: virtiofs ***
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: dracut-systemd ***
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: usrmount ***
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: base ***
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: fs-lib ***
Nov 23 15:03:00 np0005532761 dracut[1287]: *** Including module: kdumpbase ***
Nov 23 15:03:01 np0005532761 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 23 15:03:01 np0005532761 dracut[1287]:  microcode_ctl module: mangling fw_dir
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 23 15:03:01 np0005532761 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 23 15:03:01 np0005532761 dracut[1287]: *** Including module: openssl ***
Nov 23 15:03:01 np0005532761 dracut[1287]: *** Including module: shutdown ***
Nov 23 15:03:01 np0005532761 dracut[1287]: *** Including module: squash ***
Nov 23 15:03:02 np0005532761 dracut[1287]: *** Including modules done ***
Nov 23 15:03:02 np0005532761 dracut[1287]: *** Installing kernel module dependencies ***
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 25 affinity is now unmanaged
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 31 affinity is now unmanaged
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 28 affinity is now unmanaged
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 32 affinity is now unmanaged
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 30 affinity is now unmanaged
Nov 23 15:03:02 np0005532761 irqbalance[816]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 23 15:03:02 np0005532761 irqbalance[816]: IRQ 29 affinity is now unmanaged
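
The irqbalance failures interleaved with the dracut run are expected on a KVM guest: the affinity of some virtio and hypervisor-owned interrupts cannot be changed from inside the VM, so irqbalance gets EPERM, marks the IRQ unmanaged, and carries on. They are noise, not a problem with the initramfs build. A hedged sketch for reading the current affinity masks of the IRQs named above out of procfs:

    from pathlib import Path

    def irq_affinity(irq: int) -> str:
        """Return the CPU bitmask /proc reports for one IRQ."""
        return Path(f"/proc/irq/{irq}/smp_affinity").read_text().strip()

    if __name__ == "__main__":
        for irq in (25, 28, 29, 30, 31, 32):  # the IRQs logged above
            try:
                print(irq, irq_affinity(irq))
            except (FileNotFoundError, PermissionError) as exc:
                print(irq, "unreadable:", exc)
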
Nov 23 15:03:02 np0005532761 dracut[1287]: *** Installing kernel module dependencies done ***
Nov 23 15:03:02 np0005532761 dracut[1287]: *** Resolving executable dependencies ***
Nov 23 15:03:03 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:03:05 np0005532761 dracut[1287]: *** Resolving executable dependencies done ***
Nov 23 15:03:05 np0005532761 dracut[1287]: *** Generating early-microcode cpio image ***
Nov 23 15:03:05 np0005532761 dracut[1287]: *** Store current command line parameters ***
Nov 23 15:03:05 np0005532761 dracut[1287]: Stored kernel commandline:
Nov 23 15:03:05 np0005532761 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Nov 23 15:03:06 np0005532761 dracut[1287]: *** Install squash loader ***
Nov 23 15:03:07 np0005532761 dracut[1287]: *** Squashing the files inside the initramfs ***
Nov 23 15:03:08 np0005532761 dracut[1287]: *** Squashing the files inside the initramfs done ***
Nov 23 15:03:08 np0005532761 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 23 15:03:08 np0005532761 dracut[1287]: *** Hardlinking files ***
Nov 23 15:03:08 np0005532761 dracut[1287]: *** Hardlinking files done ***
Nov 23 15:03:08 np0005532761 systemd[1]: Created slice User Slice of UID 1000.
Nov 23 15:03:08 np0005532761 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 23 15:03:08 np0005532761 systemd-logind[820]: New session 1 of user zuul.
Nov 23 15:03:09 np0005532761 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 23 15:03:09 np0005532761 systemd[1]: Starting User Manager for UID 1000...
Nov 23 15:03:09 np0005532761 systemd[4185]: Queued start job for default target Main User Target.
Nov 23 15:03:09 np0005532761 systemd[4185]: Created slice User Application Slice.
Nov 23 15:03:09 np0005532761 systemd[4185]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:03:09 np0005532761 systemd[4185]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:03:09 np0005532761 systemd[4185]: Reached target Paths.
Nov 23 15:03:09 np0005532761 systemd[4185]: Reached target Timers.
Nov 23 15:03:09 np0005532761 systemd[4185]: Starting D-Bus User Message Bus Socket...
Nov 23 15:03:09 np0005532761 systemd[4185]: Starting Create User's Volatile Files and Directories...
Nov 23 15:03:09 np0005532761 systemd[4185]: Finished Create User's Volatile Files and Directories.
Nov 23 15:03:09 np0005532761 systemd[4185]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:03:09 np0005532761 systemd[4185]: Reached target Sockets.
Nov 23 15:03:09 np0005532761 systemd[4185]: Reached target Basic System.
Nov 23 15:03:09 np0005532761 systemd[4185]: Reached target Main User Target.
Nov 23 15:03:09 np0005532761 systemd[4185]: Startup finished in 128ms.
Nov 23 15:03:09 np0005532761 systemd[1]: Started User Manager for UID 1000.
Nov 23 15:03:09 np0005532761 systemd[1]: Started Session 1 of User zuul.
Nov 23 15:03:09 np0005532761 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 23 15:03:10 np0005532761 python3[4274]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:10 np0005532761 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Nov 23 15:03:10 np0005532761 kdumpctl[1017]: kdump: Starting kdump: [OK]
Nov 23 15:03:10 np0005532761 systemd[1]: Finished Crash recovery kernel arming.
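
With kexec reporting the kdump kernel loaded, the crash kernel is staged in the region reserved by the crashkernel= boot parameter. kdumpctl status is the supported check; the kernel also exposes the state directly, as in this sketch:

    from pathlib import Path

    def crash_kernel_loaded() -> bool:
        # Reads 1 once a kexec crash (kdump) kernel is staged.
        return Path("/sys/kernel/kexec_crash_loaded").read_text().strip() == "1"

    if __name__ == "__main__":
        print("kdump armed:", crash_kernel_loaded())
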
Nov 23 15:03:10 np0005532761 systemd[1]: Startup finished in 1.489s (kernel) + 1min 56.230s (initrd) + 20.891s (userspace) = 2min 18.612s.
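
The timing line decomposes as 1.489 s kernel + 116.230 s initrd + 20.891 s userspace, which sums to 138.610 s, matching the printed total of 2 min 18.612 s up to per-part rounding; the initrd phase dominates here. systemd-analyze prints the same breakdown on a live system. Re-deriving the total from the logged stamps:

    def to_seconds(stamp: str) -> float:
        """Convert systemd's '1min 56.230s' / '1.489s' stamps to seconds."""
        total = 0.0
        for part in stamp.split():
            if part.endswith("min"):
                total += float(part[:-3]) * 60
            elif part.endswith("ms"):
                total += float(part[:-2]) / 1000
            elif part.endswith("s"):
                total += float(part[:-1])
        return total

    parts = ["1.489s", "1min 56.230s", "20.891s"]  # kernel, initrd, userspace
    print(sum(to_seconds(p) for p in parts))       # -> 138.61
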
Nov 23 15:03:13 np0005532761 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:21 np0005532761 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:22 np0005532761 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 23 15:03:23 np0005532761 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:03:24 np0005532761 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtaVH+Hfp24GC/nLOCl87TIJDf22iIpXaDmkip6hyFZ60lyVpfYxFl6Z4FqAbKci+Ock4NHD78xcKBN+nqpMJyIdLDl6IlqwxWyUc/lX5/TIm6PknK9ykLQzLzQZzRt1Mk1hK89Am3bbY9TVh2ZdujVyOmjWLVqA/0FhkvYKJWaid0pgs6EdTygKGzSfc7V7Zm4ijA+aHyny1AE6h4zzdGP/d6AL8fjaGD/LpcU6DnbbD9WHzrmCJXOyJa5/Ky5sttSY3WpH33eL7o554W1og4Dq5c+z/Pc0NlJT1DXPpxrtrLpJ57vb04Ae1Wg5PeG+MECxQWJRQBS51hNbLb4KTkDErpMaWbfcwdnzisQHazTgjNidmG34/j4ZvJ/NP2OkEBabHukyMvOCFw3Ew9lQ5eR2EiNjFtdvI12kRiXyyk9Ti3dsncy9kfInD5nPUeVGnxbIGdwP/T5Z2crXhgdrIWCRjRMvV/756tjKFXfzl/eIzO6UcLkU2I9qdqZpL0h8U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:24 np0005532761 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:25 np0005532761 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:25 np0005532761 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928205.0165064-251-25392890595416/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa follow=False checksum=b8b11f458d3dcaed5d0ce620e052c77faf8a3312 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:26 np0005532761 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:26 np0005532761 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928206.0433679-306-118440953516138/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa.pub follow=False checksum=c143f6be1d4420dad576f5c3c6738e84bfb79a9b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
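
Ansible records file modes in decimal in these logs: mode=448 on the ~/.ssh directory is 0o700, 384 on id_rsa is 0o600, and 420 on id_rsa.pub is 0o644 (likewise 493 further down is 0o755, 511 is 0o777, and 288 is 0o440). The conversion is a single oct() call:

    # Decimal modes as they appear in the ansible log lines -> octal
    for mode in (448, 384, 420, 493, 511, 288):
        print(mode, oct(mode))  # 0o700, 0o600, 0o644, 0o755, 0o777, 0o440
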
Nov 23 15:03:28 np0005532761 python3[4972]: ansible-ping Invoked with data=pong
Nov 23 15:03:29 np0005532761 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:31 np0005532761 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 23 15:03:32 np0005532761 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:32 np0005532761 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:33 np0005532761 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:34 np0005532761 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:34 np0005532761 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:35 np0005532761 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:37 np0005532761 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:37 np0005532761 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:38 np0005532761 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928217.3084831-31-171007878081834/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:39 np0005532761 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532761 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532761 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532761 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532761 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532761 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532761 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532761 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532761 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532761 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532761 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532761 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532761 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532761 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532761 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532761 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532761 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532761 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532761 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532761 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532761 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532761 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532761 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532761 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:46 np0005532761 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:46 np0005532761 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
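
All of the ansible-authorized_key calls above run with exclusive=False, so the maintainers' keys are appended to /home/zuul/.ssh/authorized_keys alongside the zuul-build-sshkey rather than replacing it. A small audit sketch that lists key type and comment per entry:

    from pathlib import Path

    path = Path.home() / ".ssh" / "authorized_keys"
    for entry in path.read_text().splitlines():
        fields = entry.split()
        if len(fields) >= 2 and fields[0].startswith(("ssh-", "ecdsa-")):
            comment = " ".join(fields[2:]) or "(no comment)"
            print(f"{fields[0]:22} {comment}")
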
Nov 23 15:03:48 np0005532761 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:03:48 np0005532761 systemd[1]: Starting Time & Date Service...
Nov 23 15:03:48 np0005532761 systemd[1]: Started Time & Date Service.
Nov 23 15:03:48 np0005532761 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
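
community.general.timezone talks to systemd-timedated, which is why the service starts on demand here and idles out later. The same change by hand is one timedatectl call, sketched below (requires privileges):

    import subprocess

    # Equivalent of the community.general.timezone task above.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
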
Nov 23 15:03:49 np0005532761 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:49 np0005532761 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:50 np0005532761 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763928229.4538221-251-20415114692888/source _original_basename=tmp1f55_ylp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:50 np0005532761 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:51 np0005532761 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928230.3715787-301-197025463121113/source _original_basename=tmpsb_txaah follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:52 np0005532761 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:52 np0005532761 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928231.697866-381-44997949239670/source _original_basename=tmpjbtssgup follow=False checksum=19d309ebea5b58181725fc1dc4cea95ea4d18865 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:53 np0005532761 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:03:53 np0005532761 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:03:53 np0005532761 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:54 np0005532761 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928233.4652956-451-275682684658842/source _original_basename=tmpgnu236y8 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:54 np0005532761 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4746-eccf-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
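
The sudoers fragment is installed with mode 288 (0o440) and then the whole sudoers configuration is syntax-checked with visudo -c; a bad drop-in can lock sudo out, so validating before install is the safer ordering. A sketch of that pattern, with a placeholder rule since the real file's content is not logged:

    import os, shutil, subprocess, tempfile

    RULE = "zuul ALL=(ALL) NOPASSWD:ALL\n"  # placeholder; actual content unknown

    with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
        tmp.write(RULE)
    # visudo -cf syntax-checks just the candidate file before it goes live.
    subprocess.run(["visudo", "-cf", tmp.name], check=True)
    shutil.move(tmp.name, "/etc/sudoers.d/zuul-sudo-grep")
    os.chmod("/etc/sudoers.d/zuul-sudo-grep", 0o440)
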
Nov 23 15:03:55 np0005532761 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4746-eccf-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 23 15:03:56 np0005532761 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:04:16 np0005532761 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:04:18 np0005532761 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 23 15:04:59 np0005532761 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 23 15:04:59 np0005532761 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
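
At 15:04:59 a second virtio-net device (PCI 1af4:1000) is hot-plugged at 0000:00:07.0; the kernel assigns its BARs and it surfaces as eth1 in the NetworkManager lines that follow. Mapping an interface back to its PCI IDs through sysfs, as a sketch:

    from pathlib import Path

    def pci_ids(ifname: str) -> tuple[str, str]:
        dev = Path("/sys/class/net", ifname, "device")
        return (dev / "vendor").read_text().strip(), (dev / "device").read_text().strip()

    if __name__ == "__main__":
        print(pci_ids("eth1"))  # expect ('0x1af4', '0x1000') for virtio-net
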
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.7972] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:04:59 np0005532761 systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8170] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8214] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8221] device (eth1): carrier: link connected
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8224] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8235] policy: auto-activating connection 'Wired connection 1' (02f5066b-c429-3b6f-a7c2-622cf6bd12ad)
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8243] device (eth1): Activation: starting connection 'Wired connection 1' (02f5066b-c429-3b6f-a7c2-622cf6bd12ad)
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8244] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8250] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8256] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:04:59 np0005532761 NetworkManager[857]: <info>  [1763928299.8263] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:00 np0005532761 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-f412-6632-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:05:10 np0005532761 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:05:11 np0005532761 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928310.3482957-104-110726504098428/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=132ab245d81ec338c41a9743865fa176dd208da0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
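
NetworkManager only honours keyfiles under /etc/NetworkManager/system-connections that are root-owned and mode 0600, which the copy task enforces; the full daemon restart that follows is the heavyweight way to load the profile (nmcli connection reload is the lighter one). A sketch of writing such a keyfile with the right permissions, with a hypothetical profile body since the rendered template is not logged:

    import os

    # Hypothetical keyfile; the real ci-private-network profile is not logged,
    # though its instant ip-config later on suggests a static (manual) method.
    KEYFILE = (
        "[connection]\n"
        "id=ci-private-network\n"
        "type=ethernet\n"
        "interface-name=eth1\n"
        "\n"
        "[ipv4]\n"
        "method=manual\n"
        "address1=192.168.100.10/24\n"  # hypothetical address
    )

    path = "/etc/NetworkManager/system-connections/ci-private-network.nmconnection"
    with open(path, "w") as fh:
        fh.write(KEYFILE)
    os.chmod(path, 0o600)  # NM ignores keyfiles readable by group/other
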
Nov 23 15:05:11 np0005532761 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:05:11 np0005532761 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 15:05:11 np0005532761 systemd[1]: Stopped Network Manager Wait Online.
Nov 23 15:05:11 np0005532761 systemd[1]: Stopping Network Manager Wait Online...
Nov 23 15:05:11 np0005532761 NetworkManager[857]: <info>  [1763928311.9960] caught SIGTERM, shutting down normally.
Nov 23 15:05:11 np0005532761 systemd[1]: Stopping Network Manager...
Nov 23 15:05:11 np0005532761 NetworkManager[857]: <info>  [1763928311.9973] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:05:11 np0005532761 NetworkManager[857]: <info>  [1763928311.9974] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:11 np0005532761 NetworkManager[857]: <info>  [1763928311.9974] dhcp4 (eth0): state changed no lease
Nov 23 15:05:11 np0005532761 NetworkManager[857]: <info>  [1763928311.9979] manager: NetworkManager state is now CONNECTING
Nov 23 15:05:12 np0005532761 NetworkManager[857]: <info>  [1763928312.0048] dhcp4 (eth1): canceled DHCP transaction
Nov 23 15:05:12 np0005532761 NetworkManager[857]: <info>  [1763928312.0049] dhcp4 (eth1): state changed no lease
Nov 23 15:05:12 np0005532761 NetworkManager[857]: <info>  [1763928312.0104] exiting (success)
Nov 23 15:05:12 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:05:12 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:05:12 np0005532761 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 15:05:12 np0005532761 systemd[1]: Stopped Network Manager.
Nov 23 15:05:12 np0005532761 systemd[1]: NetworkManager.service: Consumed 1.007s CPU time, 10.0M memory peak.
Nov 23 15:05:12 np0005532761 systemd[1]: Starting Network Manager...
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.0686] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0e13931c-c8ad-4220-a705-acddc9fc6540)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.0689] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.0760] manager[0x55fb6e09e070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:05:12 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 15:05:12 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1525] hostname: hostname: using hostnamed
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1529] hostname: static hostname changed from (none) to "np0005532761.novalocal"
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1535] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1540] manager[0x55fb6e09e070]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1540] manager[0x55fb6e09e070]: rfkill: WWAN hardware radio set enabled
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1566] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1566] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1567] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1568] manager: Networking is enabled by state file
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1570] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1574] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1603] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1614] dhcp: init: Using DHCP client 'internal'
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1616] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1622] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1628] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1635] device (lo): Activation: starting connection 'lo' (1662c742-4425-4e5e-b9bb-1cb60d31d330)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1642] device (eth0): carrier: link connected
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1645] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1649] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1650] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1656] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1664] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1671] device (eth1): carrier: link connected
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1674] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1680] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (02f5066b-c429-3b6f-a7c2-622cf6bd12ad) (indicated)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1681] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1686] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1695] device (eth1): Activation: starting connection 'Wired connection 1' (02f5066b-c429-3b6f-a7c2-622cf6bd12ad)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1702] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:05:12 np0005532761 systemd[1]: Started Network Manager.
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1707] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1720] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1724] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1727] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1732] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1735] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1738] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1743] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1754] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1758] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1770] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1773] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1795] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1802] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:05:12 np0005532761 NetworkManager[7184]: <info>  [1763928312.1810] device (lo): Activation: successful, device activated.
Nov 23 15:05:12 np0005532761 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:05:12 np0005532761 python3[7240]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-f412-6632-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2528] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2536] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2600] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2645] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2649] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2654] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2659] device (eth0): Activation: successful, device activated.
Nov 23 15:05:14 np0005532761 NetworkManager[7184]: <info>  [1763928314.2666] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:05:24 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:05:42 np0005532761 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:05:43 np0005532761 systemd[4185]: Starting Mark boot as successful...
Nov 23 15:05:43 np0005532761 systemd[4185]: Finished Mark boot as successful.
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7020] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:05:57 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:05:57 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7419] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7422] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7432] device (eth1): Activation: successful, device activated.
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7441] manager: startup complete
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7444] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <warn>  [1763928357.7451] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7461] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 systemd[1]: Finished Network Manager Wait Online.
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7558] dhcp4 (eth1): canceled DHCP transaction
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7559] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7559] dhcp4 (eth1): state changed no lease
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7581] policy: auto-activating connection 'ci-private-network' (dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0)
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7587] device (eth1): Activation: starting connection 'ci-private-network' (dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0)
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7589] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7592] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7602] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7614] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7663] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7666] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:05:57 np0005532761 NetworkManager[7184]: <info>  [1763928357.7674] device (eth1): Activation: successful, device activated.
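
Worth unpacking: the auto-generated 'Wired connection 1' profile sat in DHCP until the 45 s timeout, failed with ip-config-unavailable (nothing answers DHCP on the CI private network), and only then did NetworkManager auto-activate the ci-private-network profile installed earlier, which brings eth1 up without waiting on a lease. Confirming which profile won, as a sketch around nmcli's terse output:

    import subprocess

    out = subprocess.run(
        ["nmcli", "-t", "-f", "DEVICE,STATE,CONNECTION", "device"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        dev, state, conn = line.split(":", 2)
        if dev == "eth1":
            print(dev, state, conn)  # expect: eth1 connected ci-private-network
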
Nov 23 15:06:07 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:06:12 np0005532761 systemd-logind[820]: Session 1 logged out. Waiting for processes to exit.
Nov 23 15:07:09 np0005532761 systemd-logind[820]: New session 3 of user zuul.
Nov 23 15:07:09 np0005532761 systemd[1]: Started Session 3 of User zuul.
Nov 23 15:07:10 np0005532761 python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:07:10 np0005532761 python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928429.8964603-373-130979652278153/source _original_basename=tmp74kzvi7t follow=False checksum=3134bd1d03fba929119b03a893a690ab48d9a2ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:07:14 np0005532761 systemd[1]: session-3.scope: Deactivated successfully.
Nov 23 15:07:14 np0005532761 systemd-logind[820]: Session 3 logged out. Waiting for processes to exit.
Nov 23 15:07:14 np0005532761 systemd-logind[820]: Removed session 3.
Nov 23 15:08:43 np0005532761 systemd[4185]: Created slice User Background Tasks Slice.
Nov 23 15:08:43 np0005532761 systemd[4185]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 15:08:43 np0005532761 systemd[4185]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 15:12:18 np0005532761 systemd-logind[820]: New session 4 of user zuul.
Nov 23 15:12:18 np0005532761 systemd[1]: Started Session 4 of User zuul.
Nov 23 15:12:18 np0005532761 python3[7527]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:19 np0005532761 python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:19 np0005532761 python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:19 np0005532761 python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:19 np0005532761 python3[7634]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:20 np0005532761 python3[7660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:21 np0005532761 python3[7738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:12:21 np0005532761 python3[7811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928741.051732-510-131443933034415/source _original_basename=tmpx6gu1mhe follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:22 np0005532761 python3[7861]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:12:22 np0005532761 systemd[1]: Reloading.
Nov 23 15:12:22 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:12:24 np0005532761 python3[7917]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 23 15:12:24 np0005532761 python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532761 python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532761 python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532761 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:26 np0005532761 python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
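The wait_for task blocks until the io controller is live (io.max present under system.slice), then identical throttles for device 252:0 are written into each top-level cgroup and read back. 262144000 bytes/s is 250 MiB/s. The same operation as a loop:

    # throttle device 252:0 to 18000 r/w IOPS and 250 MiB/s r/w per top-level cgroup
    LIMITS='252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000'
    for grp in init.scope machine.slice system.slice user.slice; do
        echo "$LIMITS" > "/sys/fs/cgroup/$grp/io.max"
    done
    # read the limits back, as the verification task above does
    for grp in init.scope machine.slice system.slice user.slice; do
        echo "$grp"; cat "/sys/fs/cgroup/$grp/io.max"
    done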
Nov 23 15:12:26 np0005532761 python3[8087]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:12:29 np0005532761 systemd[1]: session-4.scope: Deactivated successfully.
Nov 23 15:12:29 np0005532761 systemd[1]: session-4.scope: Consumed 3.921s CPU time.
Nov 23 15:12:29 np0005532761 systemd-logind[820]: Session 4 logged out. Waiting for processes to exit.
Nov 23 15:12:29 np0005532761 systemd-logind[820]: Removed session 4.
Nov 23 15:12:31 np0005532761 systemd-logind[820]: New session 5 of user zuul.
Nov 23 15:12:31 np0005532761 systemd[1]: Started Session 5 of User zuul.
Nov 23 15:12:31 np0005532761 python3[8120]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
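The dnf module call above is an ordinary package transaction; its CLI equivalent (assuming the default repo set, since no enablerepo/disablerepo is given):

    dnf -y install podman buildah
    rpm -q podman buildah    # confirm both landed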
Nov 23 15:12:49 np0005532761 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:12:49 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:12:58 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:13:07 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:08 np0005532761 setsebool[8185]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 23 15:13:08 np0005532761 setsebool[8185]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
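setsebool records two virt-related booleans being switched on. The log does not say whether the change was persisted across reboots; the -P below is an assumption mirroring what openstack-selinux setups normally do:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
    getsebool virt_use_nfs virt_sandbox_use_all_caps    # verify both report "on"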
Nov 23 15:13:20 np0005532761 kernel: SELinux:  Converting 388 SID table entries...
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:13:20 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:42 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 15:13:42 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:13:42 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:13:42 np0005532761 systemd[1]: Reloading.
Nov 23 15:13:42 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:13:42 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:13:46 np0005532761 python3[11420]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ba7b-575b-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:13:47 np0005532761 kernel: evm: overlay not supported
Nov 23 15:13:47 np0005532761 systemd[4185]: Starting D-Bus User Message Bus...
Nov 23 15:13:47 np0005532761 dbus-broker-launch[12191]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 23 15:13:47 np0005532761 dbus-broker-launch[12191]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 23 15:13:47 np0005532761 systemd[4185]: Started D-Bus User Message Bus.
Nov 23 15:13:47 np0005532761 dbus-broker-launch[12191]: Ready
Nov 23 15:13:47 np0005532761 systemd[4185]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 15:13:47 np0005532761 systemd[4185]: Created slice Slice /user.
Nov 23 15:13:47 np0005532761 systemd[4185]: podman-12057.scope: unit configures an IP firewall, but not running as root.
Nov 23 15:13:47 np0005532761 systemd[4185]: (This warning is only shown for the first unit using IP firewalling.)
Nov 23 15:13:47 np0005532761 systemd[4185]: Started podman-12057.scope.
Nov 23 15:13:47 np0005532761 systemd[4185]: Started podman-pause-146b2b49.scope.
Nov 23 15:13:47 np0005532761 systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 15:13:47 np0005532761 systemd[1]: session-5.scope: Consumed 59.492s CPU time.
Nov 23 15:13:47 np0005532761 systemd-logind[820]: Session 5 logged out. Waiting for processes to exit.
Nov 23 15:13:47 np0005532761 systemd-logind[820]: Removed session 5.
Nov 23 15:14:02 np0005532761 irqbalance[816]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 23 15:14:02 np0005532761 irqbalance[816]: IRQ 27 affinity is now unmanaged
Nov 23 15:14:07 np0005532761 systemd-logind[820]: New session 6 of user zuul.
Nov 23 15:14:07 np0005532761 systemd[1]: Started Session 6 of User zuul.
Nov 23 15:14:08 np0005532761 python3[21656]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:14:08 np0005532761 python3[21875]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:14:09 np0005532761 python3[22323]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532761.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 15:14:10 np0005532761 python3[22567]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
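The same ECDSA key from zuul@np0005532760 is authorized for zuul, root, and the freshly created cloud-admin. A rough shell equivalent of what ansible.posix.authorized_key does per user (idempotent append, plus the directory handling implied by manage_dir=True); the key literal is elided here but appears in full in the entries above:

    KEY='ecdsa-sha2-nistp256 AAAAE2V... zuul@np0005532760.novalocal'   # full key as logged above
    for u in zuul root cloud-admin; do
        home=$(getent passwd "$u" | cut -d: -f6)
        install -d -m 0700 -o "$u" "$home/.ssh"
        touch "$home/.ssh/authorized_keys"
        grep -qF "$KEY" "$home/.ssh/authorized_keys" || echo "$KEY" >> "$home/.ssh/authorized_keys"
        chown "$u" "$home/.ssh/authorized_keys" && chmod 0600 "$home/.ssh/authorized_keys"
    done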
Nov 23 15:14:10 np0005532761 python3[22846]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:14:10 np0005532761 python3[23081]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928850.221462-150-245832732825813/source _original_basename=tmpl2z8rqho follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
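The sudoers payload itself is masked as NOT_LOGGING_PARAMETER. Purely as a hypothetical shape for an automation account, not the actual content:

    # hypothetical /etc/sudoers.d/cloud-admin — real content not logged
    cat > /etc/sudoers.d/cloud-admin <<'EOF'
    cloud-admin ALL=(ALL) NOPASSWD:ALL
    EOF
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin    # validate the syntax before relying on it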
Nov 23 15:14:11 np0005532761 python3[23442]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 23 15:14:11 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 15:14:11 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 15:14:11 np0005532761 systemd-hostnamed[23585]: Changed pretty hostname to 'compute-0'
Nov 23 15:14:11 np0005532761 systemd-hostnamed[23585]: Hostname set to <compute-0> (static)
Nov 23 15:14:11 np0005532761 NetworkManager[7184]: <info>  [1763928851.9197] hostname: static hostname changed from "np0005532761.novalocal" to "compute-0"
Nov 23 15:14:11 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:14:11 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
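ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, as the "Changed pretty hostname" / "Hostname set" entries confirm. The CLI equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status    # static hostname should now read compute-0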
Nov 23 15:14:12 np0005532761 systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 15:14:12 np0005532761 systemd[1]: session-6.scope: Consumed 2.241s CPU time.
Nov 23 15:14:12 np0005532761 systemd-logind[820]: Session 6 logged out. Waiting for processes to exit.
Nov 23 15:14:12 np0005532761 systemd-logind[820]: Removed session 6.
Nov 23 15:14:21 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:14:32 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:14:32 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:14:32 np0005532761 systemd[1]: man-db-cache-update.service: Consumed 54.157s CPU time.
Nov 23 15:14:32 np0005532761 systemd[1]: run-rc87d6fceb6634078a32f550ba61d885a.service: Deactivated successfully.
Nov 23 15:14:41 np0005532761 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:16:40 np0005532761 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 23 15:16:40 np0005532761 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 23 15:16:40 np0005532761 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 23 15:16:40 np0005532761 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 23 15:18:11 np0005532761 systemd-logind[820]: New session 7 of user zuul.
Nov 23 15:18:11 np0005532761 systemd[1]: Started Session 7 of User zuul.
Nov 23 15:18:12 np0005532761 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:18:14 np0005532761 python3[30129]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:15 np0005532761 python3[30202]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:15 np0005532761 python3[30228]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:15 np0005532761 python3[30303]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:16 np0005532761 python3[30329]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:16 np0005532761 python3[30402]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:16 np0005532761 python3[30428]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:17 np0005532761 python3[30501]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:17 np0005532761 python3[30527]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:17 np0005532761 python3[30600]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:18 np0005532761 python3[30626]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:18 np0005532761 python3[30699]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:18 np0005532761 python3[30725]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:19 np0005532761 python3[30798]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.265106-33975-149745714688470/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
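Six .repo files plus a delorean.repo.md5 checksum companion land in /etc/yum.repos.d/ (file contents are not logged). A quick post-deploy sanity check:

    ls -l /etc/yum.repos.d/
    dnf repolist    # enabled repos should include the delorean and repo-setup-centos-* entries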
Nov 23 15:18:31 np0005532761 python3[30858]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:23:30 np0005532761 systemd-logind[820]: Session 7 logged out. Waiting for processes to exit.
Nov 23 15:23:30 np0005532761 systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 15:23:30 np0005532761 systemd[1]: session-7.scope: Consumed 5.725s CPU time.
Nov 23 15:23:30 np0005532761 systemd-logind[820]: Removed session 7.
Nov 23 15:29:41 np0005532761 systemd-logind[820]: New session 8 of user zuul.
Nov 23 15:29:41 np0005532761 systemd[1]: Started Session 8 of User zuul.
Nov 23 15:29:42 np0005532761 python3.9[31124]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:29:43 np0005532761 python3.9[31305]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
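The _raw_params above is an entire bootstrap script with its newlines escaped as #012 by the journal. Decoded, it reads:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main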
Nov 23 15:29:51 np0005532761 systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 15:29:51 np0005532761 systemd[1]: session-8.scope: Consumed 7.700s CPU time.
Nov 23 15:29:51 np0005532761 systemd-logind[820]: Session 8 logged out. Waiting for processes to exit.
Nov 23 15:29:51 np0005532761 systemd-logind[820]: Removed session 8.
Nov 23 15:30:06 np0005532761 systemd-logind[820]: New session 9 of user zuul.
Nov 23 15:30:06 np0005532761 systemd[1]: Started Session 9 of User zuul.
Nov 23 15:30:07 np0005532761 python3.9[31523]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 15:30:08 np0005532761 python3.9[31697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:10 np0005532761 python3.9[31849]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:30:11 np0005532761 python3.9[32002]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:30:12 np0005532761 python3.9[32154]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:30:13 np0005532761 python3.9[32306]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:30:14 np0005532761 python3.9[32429]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763929812.8783903-177-18500040220632/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:30:15 np0005532761 python3.9[32581]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
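bootc.fact is installed executable (mode 755) under fact_path=/etc/ansible/facts.d, so the setup run that follows executes it and parses its stdout as JSON, exposing the result as ansible_local.bootc. The shipped content is not logged; a minimal hypothetical script of that shape:

    #!/bin/sh
    # hypothetical /etc/ansible/facts.d/bootc.fact — real content not logged
    if command -v bootc >/dev/null 2>&1; then
        printf '{"is_bootc": true}\n'
    else
        printf '{"is_bootc": false}\n'
    fi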
Nov 23 15:30:16 np0005532761 python3.9[32737]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:30:16 np0005532761 python3.9[32891]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:30:17 np0005532761 python3.9[33041]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:30:23 np0005532761 python3.9[33294]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
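/proc/cmdline is a read-only procfs file, so lineinfile with create=False cannot edit it; the task evidently acts as an assertion that the node booted with cloud-init=disabled on the kernel command line. The equivalent check:

    grep -q 'cloud-init=disabled' /proc/cmdline && echo disabled || echo 'cloud-init not disabled at boot'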
Nov 23 15:30:23 np0005532761 python3.9[33444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:25 np0005532761 python3.9[33598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:26 np0005532761 python3.9[33756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:30:27 np0005532761 python3.9[33840]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:30:32 np0005532761 irqbalance[816]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 23 15:30:32 np0005532761 irqbalance[816]: IRQ 26 affinity is now unmanaged
Nov 23 15:31:08 np0005532761 systemd[1]: Reloading.
Nov 23 15:31:08 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:09 np0005532761 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 23 15:31:09 np0005532761 systemd[1]: Reloading.
Nov 23 15:31:09 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:09 np0005532761 systemd[1]: Starting dnf makecache...
Nov 23 15:31:09 np0005532761 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 23 15:31:09 np0005532761 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 23 15:31:09 np0005532761 systemd[1]: Reloading.
Nov 23 15:31:09 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:09 np0005532761 dnf[34094]: Failed determining last makecache time.
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-openstack-barbican-42b4c41831408a8e323 146 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 180 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-openstack-cinder-1c00d6490d88e436f26ef 162 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-python-stevedore-c4acc5639fd2329372142 163 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-python-observabilityclient-2f31846d73c 169 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-os-net-config-bbae2ed8a159b0435a473f38 187 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 172 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-python-designate-tests-tempest-347fdbc 173 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-openstack-glance-1fd12c29b339f30fe823e 156 kB/s | 3.0 kB     00:00
Nov 23 15:31:09 np0005532761 dnf[34094]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 158 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-openstack-manila-3c01b7181572c95dac462 160 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-python-whitebox-neutron-tests-tempest- 165 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-openstack-octavia-ba397f07a7331190208c 176 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-openstack-watcher-c014f81a8647287f6dcc 164 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-python-tcib-1124124ec06aadbac34f0d340b 145 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 144 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-openstack-swift-dc98a8463506ac520c469a  98 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-python-tempestconf-8515371b7cceebd4282 138 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: delorean-openstack-heat-ui-013accbfd179753bc3f0 138 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: CentOS Stream 9 - BaseOS                         74 kB/s | 7.3 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: CentOS Stream 9 - AppStream                      74 kB/s | 7.4 kB     00:00
Nov 23 15:31:10 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:31:10 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:31:10 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:31:10 np0005532761 dnf[34094]: CentOS Stream 9 - CRB                            28 kB/s | 7.2 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: CentOS Stream 9 - Extras packages                86 kB/s | 8.3 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: dlrn-antelope-testing                           149 kB/s | 3.0 kB     00:00
Nov 23 15:31:10 np0005532761 dnf[34094]: dlrn-antelope-build-deps                        136 kB/s | 3.0 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: centos9-rabbitmq                                 59 kB/s | 3.0 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: centos9-storage                                 102 kB/s | 3.0 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: centos9-opstools                                102 kB/s | 3.0 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: NFV SIG OpenvSwitch                              33 kB/s | 3.0 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: repo-setup-centos-appstream                     118 kB/s | 4.4 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: repo-setup-centos-baseos                        114 kB/s | 3.9 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: repo-setup-centos-highavailability               78 kB/s | 3.9 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: repo-setup-centos-powertools                    112 kB/s | 4.3 kB     00:00
Nov 23 15:31:11 np0005532761 dnf[34094]: Extra Packages for Enterprise Linux 9 - x86_64  283 kB/s |  33 kB     00:00
Nov 23 15:31:12 np0005532761 dnf[34094]: Metadata cache created.
Nov 23 15:31:12 np0005532761 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 15:31:12 np0005532761 systemd[1]: Finished dnf makecache.
Nov 23 15:31:12 np0005532761 systemd[1]: dnf-makecache.service: Consumed 1.684s CPU time.
Nov 23 15:32:24 np0005532761 kernel: SELinux:  Converting 2717 SID table entries...
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:32:24 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:32:24 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 23 15:32:24 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:32:24 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:32:24 np0005532761 systemd[1]: Reloading.
Nov 23 15:32:24 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:32:24 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:32:25 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:32:25 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:32:25 np0005532761 systemd[1]: man-db-cache-update.service: Consumed 1.057s CPU time.
Nov 23 15:32:25 np0005532761 systemd[1]: run-r910131b176d549a988d0069de8e2de2a.service: Deactivated successfully.
Nov 23 15:32:28 np0005532761 python3.9[35418]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:32:31 np0005532761 python3.9[35699]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
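ansible.posix.selinux with policy=targeted state=enforcing updates /etc/selinux/config and the running mode; roughly:

    sed -ri 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
    sed -ri 's/^SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config
    setenforce 1
    getenforce    # Enforcing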
Nov 23 15:32:32 np0005532761 python3.9[35851]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 15:32:37 np0005532761 python3.9[36004]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:32:38 np0005532761 python3.9[36158]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
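Swap provisioning is spread over several tasks: dd creates a 1 GiB /swap file, the file task locks it down to root:root 0600, and ansible.posix.mount writes the fstab entry (src=/swap, name=none, fstype=swap, opts=sw, dump=0, passno=0); mkswap and swapon follow at 15:33:24. Consolidated into shell:

    dd if=/dev/zero of=/swap bs=1M count=1024
    chown root:root /swap && chmod 0600 /swap
    grep -qE '^/swap\s' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap
    swapon --show    # confirm the 1 GiB file-backed swap is active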
Nov 23 15:32:46 np0005532761 python3.9[36311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:32:47 np0005532761 python3.9[36465]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:32:47 np0005532761 python3.9[36588]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763929966.5096142-666-183335695972877/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
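A CA bundle is staged into the system trust anchors directory; the matching /usr/bin/update-ca-trust run appears at 15:33:25. The pair amounts to:

    install -m 0644 -o root -g root tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust
    trust list | head    # spot-check that the consolidated store regenerated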
Nov 23 15:32:49 np0005532761 python3.9[36740]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:32:49 np0005532761 python3.9[36892]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:32:50 np0005532761 python3.9[37047]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:32:52 np0005532761 python3.9[37201]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 15:32:53 np0005532761 python3.9[37354]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:32:53 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:32:54 np0005532761 python3.9[37513]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 15:32:54 np0005532761 python3.9[37673]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 15:32:55 np0005532761 python3.9[37826]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:32:56 np0005532761 python3.9[37984]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
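These tasks pin the qemu user/group to uid/gid 107, create the hugetlbfs group with gid 42477, and build the vhost socket directory with a virt_cache_t label. A shell equivalent:

    groupadd -g 107 qemu                2>/dev/null || true
    useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu 2>/dev/null || true
    groupadd -g 42477 hugetlbfs         2>/dev/null || true
    install -d -m 0755 -o qemu -g qemu /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets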
Nov 23 15:32:57 np0005532761 python3.9[38136]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:00 np0005532761 python3.9[38291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:01 np0005532761 python3.9[38443]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:33:01 np0005532761 python3.9[38566]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929980.8255682-1023-84927694576867/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:03 np0005532761 python3.9[38718]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:33:03 np0005532761 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:33:03 np0005532761 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 15:33:03 np0005532761 kernel: Bridge firewalling registered
Nov 23 15:33:03 np0005532761 systemd-modules-load[38722]: Inserted module 'br_netfilter'
Nov 23 15:33:03 np0005532761 systemd[1]: Finished Load Kernel Modules.
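The payload of /etc/modules-load.d/99-edpm.conf is not logged, but restarting systemd-modules-load immediately inserts br_netfilter (and the kernel notes that bridge filtering must now be loaded explicitly), so the drop-in plausibly lists at least that module:

    # hypothetical /etc/modules-load.d/99-edpm.conf — payload not logged
    # br_netfilter
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter    # confirm the module is resident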
Nov 23 15:33:04 np0005532761 python3.9[38877]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:33:04 np0005532761 python3.9[39000]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929983.610727-1092-124330063038810/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:06 np0005532761 python3.9[39152]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:09 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:33:09 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:33:09 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:33:09 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:33:09 np0005532761 systemd[1]: Reloading.
Nov 23 15:33:09 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:09 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:33:11 np0005532761 python3.9[41288]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:33:12 np0005532761 python3.9[42491]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 15:33:13 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:33:13 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:33:13 np0005532761 systemd[1]: man-db-cache-update.service: Consumed 4.293s CPU time.
Nov 23 15:33:13 np0005532761 systemd[1]: run-r89da83b8fe4143838dbad1cb6ba2056b.service: Deactivated successfully.
Nov 23 15:33:13 np0005532761 python3.9[43170]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:33:14 np0005532761 python3.9[43322]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:14 np0005532761 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:33:14 np0005532761 systemd[1]: Starting Authorization Manager...
Nov 23 15:33:14 np0005532761 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 15:33:14 np0005532761 polkitd[43539]: Started polkitd version 0.117
Nov 23 15:33:15 np0005532761 systemd[1]: Started Authorization Manager.
Nov 23 15:33:16 np0005532761 python3.9[43709]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:16 np0005532761 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 15:33:16 np0005532761 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 15:33:16 np0005532761 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 15:33:16 np0005532761 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:33:16 np0005532761 systemd[1]: Started Dynamic System Tuning Daemon.
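The tuned sequence installs the daemon and the cpu-partitioning profiles, selects throughput-performance, and enables the service:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned.service
    tuned-adm active    # -> Current active profile: throughput-performance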
Nov 23 15:33:17 np0005532761 python3.9[43871]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 15:33:21 np0005532761 python3.9[44023]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:21 np0005532761 systemd[1]: Reloading.
Nov 23 15:33:21 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:22 np0005532761 python3.9[44211]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:22 np0005532761 systemd[1]: Reloading.
Nov 23 15:33:22 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:24 np0005532761 python3.9[44401]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:24 np0005532761 python3.9[44554]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:24 np0005532761 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 23 15:33:25 np0005532761 python3.9[44707]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:28 np0005532761 python3.9[44869]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
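KSM is switched off twice over: the units are stopped and disabled, and writing 2 to the sysfs knob stops the kernel thread and un-merges any already-shared pages (0 = stop, 1 = run, 2 = stop and unmerge, per the kernel's KSM interface):

    systemctl disable --now ksm.service ksmtuned.service
    echo 2 > /sys/kernel/mm/ksm/run    # stop KSM and unmerge all merged pages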
Nov 23 15:33:29 np0005532761 python3.9[45022]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:33:29 np0005532761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 15:33:29 np0005532761 systemd[1]: Stopped Apply Kernel Variables.
Nov 23 15:33:29 np0005532761 systemd[1]: Stopping Apply Kernel Variables...
Nov 23 15:33:29 np0005532761 systemd[1]: Starting Apply Kernel Variables...
Nov 23 15:33:29 np0005532761 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 15:33:29 np0005532761 systemd[1]: Finished Apply Kernel Variables.
Nov 23 15:33:30 np0005532761 systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 15:33:30 np0005532761 systemd[1]: session-9.scope: Consumed 2min 8.458s CPU time.
Nov 23 15:33:30 np0005532761 systemd-logind[820]: Session 9 logged out. Waiting for processes to exit.
Nov 23 15:33:30 np0005532761 systemd-logind[820]: Removed session 9.
Nov 23 15:33:35 np0005532761 systemd-logind[820]: New session 10 of user zuul.
Nov 23 15:33:35 np0005532761 systemd[1]: Started Session 10 of User zuul.
Nov 23 15:33:36 np0005532761 python3.9[45207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:33:38 np0005532761 python3.9[45363]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 15:33:39 np0005532761 python3.9[45516]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:33:41 np0005532761 python3.9[45674]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 15:33:42 np0005532761 python3.9[45834]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:33:43 np0005532761 python3.9[45919]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:33:46 np0005532761 python3.9[46082]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:58 np0005532761 kernel: SELinux:  Converting 2729 SID table entries...
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:33:58 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:33:58 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 23 15:33:58 np0005532761 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 23 15:34:00 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:00 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:00 np0005532761 systemd[1]: Reloading.
Nov 23 15:34:01 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:01 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:01 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:02 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:02 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:02 np0005532761 systemd[1]: run-r1aa42aae514f426595202e7091fe1a1c.service: Deactivated successfully.
Nov 23 15:34:03 np0005532761 python3.9[47186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:34:03 np0005532761 systemd[1]: Reloading.
Nov 23 15:34:03 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:03 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:03 np0005532761 systemd[1]: Starting Open vSwitch Database Unit...
Nov 23 15:34:03 np0005532761 chown[47228]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 23 15:34:03 np0005532761 ovs-ctl[47233]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 23 15:34:03 np0005532761 ovs-ctl[47233]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 23 15:34:03 np0005532761 ovs-ctl[47233]: Starting ovsdb-server [  OK  ]
Nov 23 15:34:03 np0005532761 ovs-vsctl[47282]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 23 15:34:04 np0005532761 ovs-vsctl[47302]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"fa015a79-13cd-4722-b3c7-7f2e111a2432\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 23 15:34:04 np0005532761 ovs-ctl[47233]: Configuring Open vSwitch system IDs [  OK  ]
Nov 23 15:34:04 np0005532761 ovs-ctl[47233]: Enabling remote OVSDB managers [  OK  ]
Nov 23 15:34:04 np0005532761 ovs-vsctl[47308]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 23 15:34:04 np0005532761 systemd[1]: Started Open vSwitch Database Unit.
Nov 23 15:34:04 np0005532761 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 23 15:34:04 np0005532761 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 23 15:34:04 np0005532761 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 23 15:34:04 np0005532761 kernel: openvswitch: Open vSwitch switching datapath
Nov 23 15:34:04 np0005532761 ovs-ctl[47352]: Inserting openvswitch module [  OK  ]
Nov 23 15:34:04 np0005532761 ovs-ctl[47321]: Starting ovs-vswitchd [  OK  ]
Nov 23 15:34:04 np0005532761 ovs-vsctl[47369]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 23 15:34:04 np0005532761 ovs-ctl[47321]: Enabling remote OVSDB managers [  OK  ]
Nov 23 15:34:04 np0005532761 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 23 15:34:04 np0005532761 systemd[1]: Starting Open vSwitch...
Nov 23 15:34:04 np0005532761 systemd[1]: Finished Open vSwitch.
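The ovs-ctl run at 15:34:03–15:34:04 bootstraps a fresh OVSDB (conf.db did not exist yet, hence the chown warning for /run/openvswitch) and then stamps identity metadata into the Open_vSwitch table. A minimal sketch reproducing those ovs-vsctl calls directly, with the values taken verbatim from the log:

    import subprocess

    def ovs_vsctl(*args):
        # --no-wait: don't wait for ovs-vswitchd, matching the logged calls
        subprocess.run(["ovs-vsctl", "--no-wait", *args], check=True)

    # initialize an empty database, then record version/identity keys
    ovs_vsctl("--", "init", "--", "set", "Open_vSwitch", ".", "db-version=8.5.1")
    ovs_vsctl("set", "Open_vSwitch", ".",
              "ovs-version=3.3.5-115.el9s",
              'external-ids:system-id="fa015a79-13cd-4722-b3c7-7f2e111a2432"',
              'external-ids:rundir="/var/run/openvswitch"',
              'system-type="centos"', 'system-version="9"')
    ovs_vsctl("add", "Open_vSwitch", ".", "external-ids", "hostname=compute-0")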
Nov 23 15:34:06 np0005532761 python3.9[47521]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:34:08 np0005532761 python3.9[47675]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 15:34:09 np0005532761 kernel: SELinux:  Converting 2743 SID table entries...
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:34:09 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
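The kernel burst above (the SID table conversion plus the policy-capability lines) is the policy reload triggered by the sefcontext task at 15:34:08, which registered a persistent file-context rule for /var/lib/edpm-config. A rough equivalent using the semanage CLI directly, assuming policycoreutils is installed; committing the rule rebuilds and reloads the policy, producing exactly this kind of kernel output:

    import subprocess

    TARGET = r"/var/lib/edpm-config(/.*)?"

    # persist the fcontext rule (type container_file_t, level s0)
    subprocess.run(
        ["semanage", "fcontext", "--add",
         "--type", "container_file_t", "--range", "s0", TARGET],
        check=True)

    # relabel anything already present under the path
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)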
Nov 23 15:34:11 np0005532761 python3.9[47832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:34:12 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 23 15:34:12 np0005532761 python3.9[47993]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:14 np0005532761 python3.9[48147]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
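The rpm -V pass right after the dnf task is a cheap integrity gate: rpm -V prints one line per file that diverges from the rpm database and exits non-zero if anything differs (or a package is missing), so the play can fail fast. A small sketch of the same check over the logged package set:

    import subprocess

    PACKAGES = ["driverctl", "lvm2", "crudini", "jq", "nftables",
                "NetworkManager", "openstack-selinux", "python3-libselinux",
                "python3-pyyaml", "rsync", "tmpwatch", "sysstat", "iproute-tc",
                "ksmtuned", "systemd-container", "crypto-policies-scripts",
                "grubby", "sos"]

    # non-zero exit = at least one discrepancy (size, mode, digest, ...)
    result = subprocess.run(["rpm", "-V", *PACKAGES],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("package verification failed:\n" + result.stdout)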
Nov 23 15:34:16 np0005532761 python3.9[48434]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:34:17 np0005532761 python3.9[48584]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:34:17 np0005532761 python3.9[48738]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:19 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:19 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:20 np0005532761 systemd[1]: Reloading.
Nov 23 15:34:20 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:20 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:20 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:20 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:20 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:20 np0005532761 systemd[1]: run-r9461b9aa63a74330922d41600e0aab83.service: Deactivated successfully.
Nov 23 15:34:21 np0005532761 python3.9[49055]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:34:21 np0005532761 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 15:34:21 np0005532761 systemd[1]: Stopped Network Manager Wait Online.
Nov 23 15:34:21 np0005532761 systemd[1]: Stopping Network Manager Wait Online...
Nov 23 15:34:21 np0005532761 systemd[1]: Stopping Network Manager...
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3580] caught SIGTERM, shutting down normally.
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3591] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3591] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3592] dhcp4 (eth0): state changed no lease
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3593] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:34:21 np0005532761 NetworkManager[7184]: <info>  [1763930061.3643] exiting (success)
Nov 23 15:34:21 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:34:21 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:34:21 np0005532761 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 15:34:21 np0005532761 systemd[1]: Stopped Network Manager.
Nov 23 15:34:21 np0005532761 systemd[1]: NetworkManager.service: Consumed 11.939s CPU time, 4.0M memory peak, read 0B from disk, written 11.0K to disk.
Nov 23 15:34:21 np0005532761 systemd[1]: Starting Network Manager...
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.4259] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0e13931c-c8ad-4220-a705-acddc9fc6540)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.4260] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.4314] manager[0x55b3d5938090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:34:21 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 15:34:21 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5015] hostname: hostname: using hostnamed
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5016] hostname: static hostname changed from (none) to "compute-0"
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5020] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5024] manager[0x55b3d5938090]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5024] manager[0x55b3d5938090]: rfkill: WWAN hardware radio set enabled
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5041] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5049] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5050] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5050] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5050] manager: Networking is enabled by state file
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5052] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5055] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5078] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5085] dhcp: init: Using DHCP client 'internal'
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5087] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5092] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5096] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5103] device (lo): Activation: starting connection 'lo' (1662c742-4425-4e5e-b9bb-1cb60d31d330)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5108] device (eth0): carrier: link connected
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5111] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5115] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5115] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5121] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5126] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5131] device (eth1): carrier: link connected
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5134] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5137] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0) (indicated)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5138] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5142] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5148] device (eth1): Activation: starting connection 'ci-private-network' (dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5153] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:34:21 np0005532761 systemd[1]: Started Network Manager.
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5158] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5160] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5161] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5163] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5166] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5168] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5171] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5174] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5179] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5181] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5200] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5214] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5220] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5222] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5226] device (lo): Activation: successful, device activated.
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5233] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5239] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5311] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5318] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5325] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5329] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5333] device (eth1): Activation: successful, device activated.
Nov 23 15:34:21 np0005532761 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5363] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5365] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5368] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5371] device (eth0): Activation: successful, device activated.
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5375] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:34:21 np0005532761 NetworkManager[49067]: <info>  [1763930061.5377] manager: startup complete
Nov 23 15:34:21 np0005532761 systemd[1]: Finished Network Manager Wait Online.
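The restart at 15:34:21 shows NetworkManager re-assuming lo/eth0/eth1 in place (managed-type 'assume'/'external'), re-acquiring the eth0 DHCP lease, and only then reporting "startup complete", which is what lets Network Manager Wait Online finish. A rough readiness probe along the same lines, polling nmcli until the physical interfaces report connected (nm-online -s is what the wait-online unit actually uses; this is just an illustrative stand-in):

    import subprocess
    import time

    def devices_connected(*devs):
        # terse output: one "DEVICE:STATE" line per device
        out = subprocess.run(
            ["nmcli", "-t", "-f", "DEVICE,STATE", "device"],
            capture_output=True, text=True, check=True).stdout
        states = dict(line.split(":", 1) for line in out.splitlines() if line)
        return all(states.get(d) == "connected" for d in devs)

    deadline = time.monotonic() + 45        # mirror the 45 s DHCP timeout above
    while not devices_connected("eth0", "eth1"):
        if time.monotonic() > deadline:
            raise TimeoutError("network did not come up")
        time.sleep(1)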
Nov 23 15:34:22 np0005532761 python3.9[49281]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:27 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:27 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:27 np0005532761 systemd[1]: Reloading.
Nov 23 15:34:27 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:27 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:27 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:28 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:28 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:28 np0005532761 systemd[1]: run-r4889a3559b2747a7af3160fa74e87d08.service: Deactivated successfully.
Nov 23 15:34:29 np0005532761 python3.9[49741]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:34:30 np0005532761 python3.9[49893]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:31 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:34:31 np0005532761 python3.9[50049]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:32 np0005532761 python3.9[50201]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:33 np0005532761 python3.9[50353]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:33 np0005532761 python3.9[50507]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
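The four ini_file tasks between 15:34:30 and 15:34:33 pin no-auto-default=* in the [main] section of NetworkManager.conf and strip any dns/rc-manager overrides (also from cloud-init's drop-in), so NetworkManager never auto-generates wired profiles and keeps its default resolv.conf handling. A minimal configparser sketch performing the same edits; it ignores the backup and mode handling the Ansible module also does:

    import configparser

    def edit(path, set_opts=(), drop_opts=()):
        cfg = configparser.ConfigParser()
        cfg.read(path)                      # a missing file reads as empty
        if not cfg.has_section("main"):
            cfg.add_section("main")
        for key, value in set_opts:
            cfg.set("main", key, value)
        for key in drop_opts:
            cfg.remove_option("main", key)
        with open(path, "w") as fh:
            # no_extra_spaces=True in the task -> "key=value" form
            cfg.write(fh, space_around_delimiters=False)

    edit("/etc/NetworkManager/NetworkManager.conf",
         set_opts=[("no-auto-default", "*")], drop_opts=["dns", "rc-manager"])
    edit("/etc/NetworkManager/conf.d/99-cloud-init.conf",
         drop_opts=["dns", "rc-manager"])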
Nov 23 15:34:35 np0005532761 python3.9[50659]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:35 np0005532761 python3.9[50782]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930074.5423295-647-53169772352331/.source _original_basename=.2r8_xuvf follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:36 np0005532761 python3.9[50934]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:37 np0005532761 python3.9[51086]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 23 15:34:38 np0005532761 python3.9[51238]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:41 np0005532761 python3.9[51668]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 23 15:34:42 np0005532761 ansible-async_wrapper.py[51843]: Invoked with j783590019044 300 /home/zuul/.ansible/tmp/ansible-tmp-1763930081.4452298-845-144044517474190/AnsiballZ_edpm_os_net_config.py _
Nov 23 15:34:42 np0005532761 ansible-async_wrapper.py[51846]: Starting module and watcher
Nov 23 15:34:42 np0005532761 ansible-async_wrapper.py[51846]: Start watching 51847 (300)
Nov 23 15:34:42 np0005532761 ansible-async_wrapper.py[51847]: Start module (51847)
Nov 23 15:34:42 np0005532761 ansible-async_wrapper.py[51843]: Return async_wrapper task started.
Nov 23 15:34:42 np0005532761 python3.9[51848]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
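The async_wrapper lines show how Ansible backgrounds this job: a watcher process (PID 51846) babysits the module (PID 51847) with a 300 s budget while the play polls the job id (j783590019044). The module itself drives os-net-config with the parameters logged above; assuming those map one-to-one onto the CLI (and that use_nmstate selects the nmstate provider), the underlying call is roughly:

    import subprocess

    # --detailed-exit-codes: by convention exit 2 means "configuration was
    # changed", not an error, so accept it alongside 0
    cmd = ["os-net-config",
           "--config-file", "/etc/os-net-config/config.yaml",
           "--detailed-exit-codes",
           "--debug",
           "--cleanup"]
    rc = subprocess.run(cmd).returncode
    if rc not in (0, 2):
        raise RuntimeError(f"os-net-config failed with exit code {rc}")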
Nov 23 15:34:43 np0005532761 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 23 15:34:43 np0005532761 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 23 15:34:43 np0005532761 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 23 15:34:43 np0005532761 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 23 15:34:43 np0005532761 kernel: cfg80211: failed to load regulatory.db
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7539] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7552] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
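Those two audit lines are the safety net for everything that follows: a NetworkManager checkpoint is created before any device is touched, its rollback timeout is nudged while the work proceeds (another adjust appears at 15:34:44.8617), and it is destroyed only on success; if the process dies mid-change, NM restores the pre-checkpoint state on its own. A loose sketch of that pattern against NM's D-Bus checkpoint API (CheckpointCreate / CheckpointAdjustRollbackTimeout / CheckpointRollback / CheckpointDestroy), assuming the dbus-python bindings; treat the details as illustrative, not as what the module literally runs:

    import dbus

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager",
                       "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager")

    # empty device array = checkpoint every device; auto-rollback in 60 s
    checkpoint = nm.CheckpointCreate(dbus.Array([], signature="o"),
                                     dbus.UInt32(60), dbus.UInt32(0))
    try:
        # ... connection-add / connection-update / device-reapply work here ...
        nm.CheckpointAdjustRollbackTimeout(checkpoint, dbus.UInt32(60))  # keep-alive
        nm.CheckpointDestroy(checkpoint)    # commit: cancel the pending rollback
    except Exception:
        nm.CheckpointRollback(checkpoint)   # revert all devices on failure
        raise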
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7984] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7985] audit: op="connection-add" uuid="baa76c5c-40a5-4d5d-8d36-ace5cd26d950" name="br-ex-br" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7998] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.7999] audit: op="connection-add" uuid="954f590d-0259-45cd-be41-05035e716ae2" name="br-ex-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8008] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8009] audit: op="connection-add" uuid="7bd7ad76-1e89-4adf-b288-f7aad6b054ac" name="eth1-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8019] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8020] audit: op="connection-add" uuid="b20dd941-2271-4534-9501-df5e56f8aeed" name="vlan20-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8029] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8030] audit: op="connection-add" uuid="8fc43791-a01f-4e4e-929b-6b628c039369" name="vlan21-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8040] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8041] audit: op="connection-add" uuid="7cda2760-dc2b-44a6-b366-b4eefebdfc9c" name="vlan22-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8050] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8052] audit: op="connection-add" uuid="b13edcc0-de21-4c26-92f4-67df8a059847" name="vlan23-port" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8068] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8082] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8083] audit: op="connection-add" uuid="54a28d6a-a9e8-4e43-8a3b-7da66a5b8729" name="br-ex-if" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8119] audit: op="connection-update" uuid="dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv4.routes,ipv4.addresses,ipv4.method,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.routes,ipv6.addresses,ipv6.method,connection.controller,connection.master,connection.slave-type,connection.port-type,connection.timestamp" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8132] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8133] audit: op="connection-add" uuid="7b07922d-e953-45c8-93d8-4de5443b7549" name="vlan20-if" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8147] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8148] audit: op="connection-add" uuid="d3c493ed-d368-429b-9664-293bbe6762a2" name="vlan21-if" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8162] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8163] audit: op="connection-add" uuid="8ee5c2ba-2760-4f8f-89aa-bb9b7efbec83" name="vlan22-if" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8176] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8177] audit: op="connection-add" uuid="5815b979-cf7f-4018-9d6c-ff4a84f9ffba" name="vlan23-if" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8188] audit: op="connection-delete" uuid="02f5066b-c429-3b6f-a7c2-622cf6bd12ad" name="Wired connection 1" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8199] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8208] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8210] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (baa76c5c-40a5-4d5d-8d36-ace5cd26d950)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8211] audit: op="connection-activate" uuid="baa76c5c-40a5-4d5d-8d36-ace5cd26d950" name="br-ex-br" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8213] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8217] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8220] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (954f590d-0259-45cd-be41-05035e716ae2)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8222] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8227] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8230] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (7bd7ad76-1e89-4adf-b288-f7aad6b054ac)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8231] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8236] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8239] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b20dd941-2271-4534-9501-df5e56f8aeed)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8241] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8246] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8249] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8fc43791-a01f-4e4e-929b-6b628c039369)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8251] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8256] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8259] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (7cda2760-dc2b-44a6-b366-b4eefebdfc9c)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8260] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8265] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8269] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b13edcc0-de21-4c26-92f4-67df8a059847)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8269] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8271] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8272] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8276] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8281] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8284] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (54a28d6a-a9e8-4e43-8a3b-7da66a5b8729)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8285] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8287] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8289] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8289] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8290] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8297] device (eth1): disconnecting for new activation request.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8298] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8300] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8301] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8301] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8303] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8306] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8309] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7b07922d-e953-45c8-93d8-4de5443b7549)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8309] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8311] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8312] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8313] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8314] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8317] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8320] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d3c493ed-d368-429b-9664-293bbe6762a2)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8321] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8323] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8324] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8324] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8326] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8329] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8332] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (8ee5c2ba-2760-4f8f-89aa-bb9b7efbec83)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8332] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8334] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8335] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8336] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8338] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8341] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8343] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (5815b979-cf7f-4018-9d6c-ff4a84f9ffba)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8344] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8345] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8347] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8347] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8349] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8360] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8362] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8365] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8367] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8372] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8376] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 kernel: ovs-system: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8379] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8382] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8384] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8398] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8405] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8409] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8411] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8416] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 kernel: Timeout policy base is empty
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8421] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 systemd-udevd[51855]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8425] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8427] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8433] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8437] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8441] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8444] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8451] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8456] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8457] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8458] dhcp4 (eth0): state changed no lease
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8460] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8471] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8476] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51849 uid=0 result="fail" reason="Device is not activated"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8519] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8523] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8570] device (eth1): disconnecting for new activation request.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8571] audit: op="connection-activate" uuid="dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0" name="ci-private-network" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8573] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8594] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8603] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8617] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8617] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 23 15:34:44 np0005532761 kernel: br-ex: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8770] device (eth1): Activation: starting connection 'ci-private-network' (dd7bdc81-1cbe-5063-8ac9-0147b3ade6c0)
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8775] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8786] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8790] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8796] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8801] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 kernel: vlan22: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8817] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 systemd-udevd[51853]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8819] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8825] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8826] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8828] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8829] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8842] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8848] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8853] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8856] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8859] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8861] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8865] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8869] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8873] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8876] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8882] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8885] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 kernel: vlan21: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8889] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8899] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8903] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8919] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8937] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8952] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8955] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8962] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8967] device (eth1): Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8971] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8978] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8983] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.8992] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9005] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532761 kernel: vlan23: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9026] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9029] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9035] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9040] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9050] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9052] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9057] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 kernel: vlan20: entered promiscuous mode
Nov 23 15:34:44 np0005532761 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9133] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9144] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9161] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9162] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9167] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9208] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9220] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9236] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9239] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532761 NetworkManager[49067]: <info>  [1763930084.9244] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.0483] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.2089] checkpoint[0x55b3d590d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.2093] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51849 uid=0 result="success"
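
[annotation] The checkpoint-create → checkpoint-adjust-rollback-timeout → checkpoint-destroy audit trail above is NetworkManager's transactional API: the caller snapshots device state, applies its changes, and must confirm within the rollback window or NetworkManager reverts everything. A minimal sketch of the same sequence over D-Bus via busctl follows; the empty device list, 60 s window, and flags=0 are illustrative assumptions, not values read from this run.

    import subprocess

    NM = "org.freedesktop.NetworkManager"

    def call(method, *args):
        out = subprocess.run(
            ["busctl", "call", NM, "/org/freedesktop/NetworkManager", NM, method, *args],
            check=True, capture_output=True, text=True).stdout
        return out.strip()

    # Snapshot all devices (empty 'ao' device list) with a 60 s rollback
    # window; flags=0 is an illustrative assumption.
    reply = call("CheckpointCreate", "aouu", "0", "60", "0")
    checkpoint = reply.split()[-1].strip('"')      # e.g. .../Checkpoint/1

    # ...apply network changes, keeping the window open while testing...
    call("CheckpointAdjustRollbackTimeout", "ou", checkpoint, "120")

    # Confirming the changes = destroying the checkpoint, which is the
    # 'checkpoint-destroy ... result="success"' audit line above.
    call("CheckpointDestroy", "o", checkpoint)
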
Nov 23 15:34:46 np0005532761 python3.9[52206]: ansible-ansible.legacy.async_status Invoked with jid=j783590019044.51843 mode=status _async_dir=/root/.ansible_async
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.4741] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.4753] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.6557] audit: op="networking-control" arg="global-dns-configuration" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.6581] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.6607] audit: op="networking-control" arg="global-dns-configuration" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.6630] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.7979] checkpoint[0x55b3d590da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 23 15:34:46 np0005532761 NetworkManager[49067]: <info>  [1763930086.7982] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51849 uid=0 result="success"
Nov 23 15:34:46 np0005532761 ansible-async_wrapper.py[51847]: Module complete (51847)
Nov 23 15:34:47 np0005532761 ansible-async_wrapper.py[51846]: Done in kid B.
Nov 23 15:34:49 np0005532761 python3.9[52312]: ansible-ansible.legacy.async_status Invoked with jid=j783590019044.51843 mode=status _async_dir=/root/.ansible_async
Nov 23 15:34:50 np0005532761 python3.9[52412]: ansible-ansible.legacy.async_status Invoked with jid=j783590019044.51843 mode=cleanup _async_dir=/root/.ansible_async
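
[annotation] The async_status calls above are the tail of Ansible's fire-and-forget pattern: async_wrapper (PIDs 51846/51847) runs the long network step detached, writes a JSON results file keyed by jid under _async_dir, and the controller polls mode=status until the job reports finished, then mode=cleanup deletes the file. A sketch of that poll-then-cleanup loop, assuming only the documented results-file layout:

    import json
    import pathlib
    import time

    ASYNC_DIR = pathlib.Path("/root/.ansible_async")     # _async_dir above

    def poll(jid, interval=3.0, timeout=600.0):
        """Poll a detached job the way async_status mode=status does: the
        wrapper writes a JSON results file and sets "finished" when done."""
        results = ASYNC_DIR / jid
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if results.exists():
                data = json.loads(results.read_text() or "{}")
                if data.get("finished"):
                    results.unlink()           # the mode=cleanup step
                    return data
            time.sleep(interval)
        raise TimeoutError(f"async job {jid} did not finish")

    # e.g. poll("j783590019044.51843")
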
Nov 23 15:34:51 np0005532761 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:34:51 np0005532761 python3.9[52568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:52 np0005532761 python3.9[52691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930091.367575-926-260781117980387/.source.returncode _original_basename=.gjvyhegn follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:53 np0005532761 python3.9[52845]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:53 np0005532761 python3.9[52971]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930092.8910737-974-51930085031115/.source.cfg _original_basename=.t2w33qmy follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
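
[annotation] Writing 99-edpm-disable-network-config.cfg is the conventional way to stop cloud-init from rewriting network configuration on later boots now that os-net-config owns the interfaces. The customary payload is the single stanza below; the file's actual contents are not shown in the log, so treat this as an assumption.

    import pathlib

    # Conventional cloud-init drop-in for delegating network config to
    # other tooling; the real contents on this host are an assumption.
    pathlib.Path(
        "/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg"
    ).write_text("network: {config: disabled}\n")
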
Nov 23 15:34:55 np0005532761 python3.9[53123]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:34:55 np0005532761 systemd[1]: Reloading Network Manager...
Nov 23 15:34:55 np0005532761 NetworkManager[49067]: <info>  [1763930095.2254] audit: op="reload" arg="0" pid=53127 uid=0 result="success"
Nov 23 15:34:55 np0005532761 NetworkManager[49067]: <info>  [1763930095.2264] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 23 15:34:55 np0005532761 systemd[1]: Reloaded Network Manager.
Nov 23 15:34:55 np0005532761 systemd[1]: session-10.scope: Deactivated successfully.
Nov 23 15:34:55 np0005532761 systemd[1]: session-10.scope: Consumed 48.069s CPU time.
Nov 23 15:34:55 np0005532761 systemd-logind[820]: Session 10 logged out. Waiting for processes to exit.
Nov 23 15:34:55 np0005532761 systemd-logind[820]: Removed session 10.
Nov 23 15:35:01 np0005532761 systemd-logind[820]: New session 11 of user zuul.
Nov 23 15:35:01 np0005532761 systemd[1]: Started Session 11 of User zuul.
Nov 23 15:35:02 np0005532761 python3.9[53311]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:03 np0005532761 python3.9[53466]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:05 np0005532761 python3.9[53659]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:05 np0005532761 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:35:05 np0005532761 systemd[1]: session-11.scope: Deactivated successfully.
Nov 23 15:35:05 np0005532761 systemd[1]: session-11.scope: Consumed 2.200s CPU time.
Nov 23 15:35:05 np0005532761 systemd-logind[820]: Session 11 logged out. Waiting for processes to exit.
Nov 23 15:35:05 np0005532761 systemd-logind[820]: Removed session 11.
Nov 23 15:35:11 np0005532761 systemd-logind[820]: New session 12 of user zuul.
Nov 23 15:35:11 np0005532761 systemd[1]: Started Session 12 of User zuul.
Nov 23 15:35:12 np0005532761 python3.9[53842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:13 np0005532761 python3.9[53997]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:14 np0005532761 python3.9[54153]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:15 np0005532761 python3.9[54239]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:17 np0005532761 python3.9[54395]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:19 np0005532761 python3.9[54590]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:19 np0005532761 python3.9[54742]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:19 np0005532761 systemd[1]: var-lib-containers-storage-overlay-compat1561652280-merged.mount: Deactivated successfully.
Nov 23 15:35:19 np0005532761 podman[54743]: 2025-11-23 20:35:19.993998891 +0000 UTC m=+0.103770166 system refresh
Nov 23 15:35:20 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:35:21 np0005532761 python3.9[54906]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:21 np0005532761 python3.9[55029]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930120.3743403-197-200096658394112/.source.json follow=False _original_basename=podman_network_config.j2 checksum=95d2102c796a3a3630f371e9354a5f605997d3dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
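
[annotation] The inspect/copy pair above first lets podman materialise its default netavark network (the `system refresh` event at 15:35:19) and then replaces /etc/containers/networks/podman.json with the templated definition. A quick drift check between the file and the live view, assuming only the standard netavark keys:

    import json
    import pathlib
    import subprocess

    on_disk = json.loads(
        pathlib.Path("/etc/containers/networks/podman.json").read_text())
    live = json.loads(subprocess.run(
        ["podman", "network", "inspect", "podman"],
        check=True, capture_output=True, text=True).stdout)[0]

    # netavark network definitions carry at least these keys.
    for key in ("name", "driver", "subnets"):
        assert on_disk.get(key) == live.get(key), f"{key} drifted"
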
Nov 23 15:35:22 np0005532761 python3.9[55181]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:23 np0005532761 python3.9[55306]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930122.0791137-242-79280402760131/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:24 np0005532761 python3.9[55458]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:24 np0005532761 python3.9[55610]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:25 np0005532761 python3.9[55762]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:26 np0005532761 python3.9[55914]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
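
[annotation] The four ini_file tasks above pin container runtime defaults in /etc/containers/containers.conf: pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network]. containers.conf is TOML, but these simple key = "value" lines are INI-compatible, which is why ini_file works here. A sketch that re-checks the end state (note the quotes are part of the stored values, as in the module arguments above):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("/etc/containers/containers.conf")

    expected = {
        ("containers", "pids_limit"): "4096",
        ("engine", "events_logger"): '"journald"',
        ("engine", "runtime"): '"crun"',
        ("network", "network_backend"): '"netavark"',
    }
    for (section, option), value in expected.items():
        assert cfg.get(section, option) == value, (section, option)
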
Nov 23 15:35:27 np0005532761 python3.9[56066]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:30 np0005532761 python3.9[56219]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:31 np0005532761 python3.9[56373]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:35:32 np0005532761 python3.9[56525]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:35:32 np0005532761 python3.9[56681]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:33 np0005532761 python3.9[56834]: ansible-service_facts Invoked
Nov 23 15:35:33 np0005532761 network[56851]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:35:33 np0005532761 network[56852]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:35:33 np0005532761 network[56853]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:35:40 np0005532761 python3.9[57305]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:42 np0005532761 python3.9[57460]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 15:35:44 np0005532761 python3.9[57612]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:44 np0005532761 python3.9[57737]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930143.8811436-674-179242573376618/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:45 np0005532761 python3.9[57892]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:46 np0005532761 python3.9[58017]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930145.509839-719-149213408610576/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:48 np0005532761 python3.9[58172]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
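
[annotation] PEERNTP=no in /etc/sysconfig/network keeps the DHCP client scripts from feeding DHCP-advertised NTP servers to chronyd, so only the sources templated into /etc/chrony.conf are used. The lineinfile semantics invoked above (replace the first ^PEERNTP= match, otherwise append) reduce to a few lines:

    import pathlib
    import re

    def ensure_line(path, regexp, line):
        """Minimal lineinfile: replace the first regexp match, else append."""
        p = pathlib.Path(path)
        text = p.read_text() if p.exists() else ""
        pattern = re.compile(regexp, re.MULTILINE)
        if pattern.search(text):
            text = pattern.sub(line, text, count=1)
        else:
            text += ("" if not text or text.endswith("\n") else "\n") + line + "\n"
        p.write_text(text)

    ensure_line("/etc/sysconfig/network", r"^PEERNTP=.*", "PEERNTP=no")
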
Nov 23 15:35:50 np0005532761 python3.9[58326]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:51 np0005532761 python3.9[58410]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:35:54 np0005532761 python3.9[58566]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:54 np0005532761 python3.9[58650]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:35:54 np0005532761 chronyd[829]: chronyd exiting
Nov 23 15:35:54 np0005532761 systemd[1]: Stopping NTP client/server...
Nov 23 15:35:54 np0005532761 systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 15:35:54 np0005532761 systemd[1]: Stopped NTP client/server.
Nov 23 15:35:54 np0005532761 systemd[1]: Starting NTP client/server...
Nov 23 15:35:54 np0005532761 chronyd[58658]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 15:35:54 np0005532761 chronyd[58658]: Frequency -25.941 +/- 0.140 ppm read from /var/lib/chrony/drift
Nov 23 15:35:54 np0005532761 chronyd[58658]: Loaded seccomp filter (level 2)
Nov 23 15:35:54 np0005532761 systemd[1]: Started NTP client/server.
Nov 23 15:35:55 np0005532761 systemd-logind[820]: Session 12 logged out. Waiting for processes to exit.
Nov 23 15:35:55 np0005532761 systemd[1]: session-12.scope: Deactivated successfully.
Nov 23 15:35:55 np0005532761 systemd[1]: session-12.scope: Consumed 24.297s CPU time.
Nov 23 15:35:55 np0005532761 systemd-logind[820]: Removed session 12.
Nov 23 15:36:01 np0005532761 systemd-logind[820]: New session 13 of user zuul.
Nov 23 15:36:01 np0005532761 systemd[1]: Started Session 13 of User zuul.
Nov 23 15:36:01 np0005532761 python3.9[58839]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:02 np0005532761 python3.9[58991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:03 np0005532761 python3.9[59114]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930162.270093-62-132619181439427/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:04 np0005532761 systemd[1]: session-13.scope: Deactivated successfully.
Nov 23 15:36:04 np0005532761 systemd[1]: session-13.scope: Consumed 1.523s CPU time.
Nov 23 15:36:04 np0005532761 systemd-logind[820]: Session 13 logged out. Waiting for processes to exit.
Nov 23 15:36:04 np0005532761 systemd-logind[820]: Removed session 13.
Nov 23 15:36:09 np0005532761 systemd-logind[820]: New session 14 of user zuul.
Nov 23 15:36:09 np0005532761 systemd[1]: Started Session 14 of User zuul.
Nov 23 15:36:10 np0005532761 python3.9[59292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:36:11 np0005532761 python3.9[59448]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:12 np0005532761 python3.9[59623]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:13 np0005532761 python3.9[59746]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763930171.940233-83-162093332778799/.source.json _original_basename=.txhfjnak follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:14 np0005532761 python3.9[59900]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:15 np0005532761 python3.9[60023]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930174.0033486-152-161885895618352/.source _original_basename=.3y7f_wub follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:16 np0005532761 python3.9[60175]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:16 np0005532761 python3.9[60327]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:17 np0005532761 python3.9[60450]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930176.3346481-224-31128509311164/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:17 np0005532761 python3.9[60602]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:18 np0005532761 python3.9[60725]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930177.5224485-224-167837748686350/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:19 np0005532761 python3.9[60877]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:20 np0005532761 python3.9[61029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:21 np0005532761 python3.9[61152]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930180.1061966-335-84222247561010/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:22 np0005532761 python3.9[61304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:22 np0005532761 python3.9[61427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930181.6496885-380-129609264593960/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:24 np0005532761 python3.9[61579]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:24 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:24 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:24 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:24 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:24 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:24 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:24 np0005532761 systemd[1]: Starting EDPM Container Shutdown...
Nov 23 15:36:24 np0005532761 systemd[1]: Finished EDPM Container Shutdown.
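
[annotation] The unit-plus-preset pair above is the stock recipe for shipping a service that defaults to enabled: install the .service under /etc/systemd/system, drop a 91-*.preset declaring the policy, then daemon-reload, enable, and start (hence the two `Reloading.` passes). A sketch of the same flow; the preset's one-line content is an assumption about this host.

    import pathlib
    import subprocess

    # Preset files declare the *default* enablement policy.
    pathlib.Path(
        "/etc/systemd/system-preset/91-edpm-container-shutdown.preset"
    ).write_text("enable edpm-container-shutdown.service\n")

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # 'preset' enables/disables according to the policy file above.
    subprocess.run(["systemctl", "preset", "edpm-container-shutdown.service"],
                   check=True)
    subprocess.run(["systemctl", "start", "edpm-container-shutdown.service"],
                   check=True)
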
Nov 23 15:36:25 np0005532761 python3.9[61806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:26 np0005532761 python3.9[61929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930184.9730444-449-189356288909272/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:27 np0005532761 python3.9[62081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:27 np0005532761 python3.9[62204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930186.6033912-494-88935457947510/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:28 np0005532761 python3.9[62356]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:28 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:28 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:28 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:28 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:28 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:28 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:29 np0005532761 systemd[1]: Starting Create netns directory...
Nov 23 15:36:29 np0005532761 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:36:29 np0005532761 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:36:29 np0005532761 systemd[1]: Finished Create netns directory.
Nov 23 15:36:30 np0005532761 python3.9[62581]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:36:30 np0005532761 network[62598]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:36:30 np0005532761 network[62599]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:36:30 np0005532761 network[62600]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:36:36 np0005532761 python3.9[62864]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:36 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:36 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:36 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:36 np0005532761 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 23 15:36:36 np0005532761 iptables.init[62905]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 23 15:36:36 np0005532761 iptables.init[62905]: iptables: Flushing firewall rules: [  OK  ]
Nov 23 15:36:36 np0005532761 systemd[1]: iptables.service: Deactivated successfully.
Nov 23 15:36:36 np0005532761 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 23 15:36:37 np0005532761 python3.9[63104]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:38 np0005532761 python3.9[63258]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:39 np0005532761 systemd[1]: Reloading.
Nov 23 15:36:39 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:39 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:39 np0005532761 systemd[1]: Starting Netfilter Tables...
Nov 23 15:36:39 np0005532761 systemd[1]: Finished Netfilter Tables.
Nov 23 15:36:40 np0005532761 python3.9[63449]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
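
[annotation] This block is the cutover from the legacy iptables services to nftables: stop and disable iptables/ip6tables (their stop routine resets chain policies to ACCEPT and flushes rules, as logged above), enable nftables.service, then clear any leftover ruleset before the EDPM chains are loaded. The same sequence, sketched:

    import subprocess

    def sh(*argv):
        subprocess.run(argv, check=True)

    # Retire the legacy firewall services.
    sh("systemctl", "disable", "--now", "iptables.service")
    sh("systemctl", "disable", "--now", "ip6tables.service")

    # Bring up nftables and start from an empty ruleset.
    sh("systemctl", "enable", "--now", "nftables.service")
    sh("nft", "flush", "ruleset")
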
Nov 23 15:36:41 np0005532761 python3.9[63602]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:42 np0005532761 python3.9[63727]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930201.1348238-701-248312182873460/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:43 np0005532761 python3.9[63880]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:36:43 np0005532761 systemd[1]: Reloading OpenSSH server daemon...
Nov 23 15:36:43 np0005532761 systemd[1]: Reloaded OpenSSH server daemon.
Nov 23 15:36:44 np0005532761 python3.9[64036]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:44 np0005532761 python3.9[64188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:45 np0005532761 python3.9[64311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930204.4612284-794-74113618351761/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:46 np0005532761 python3.9[64463]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:36:46 np0005532761 systemd[1]: Starting Time & Date Service...
Nov 23 15:36:46 np0005532761 systemd[1]: Started Time & Date Service.
Nov 23 15:36:47 np0005532761 python3.9[64619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:48 np0005532761 python3.9[64771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:49 np0005532761 python3.9[64894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930208.2787735-899-194371660513584/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:50 np0005532761 python3.9[65046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:50 np0005532761 python3.9[65169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930209.8558304-944-202667134182677/.source.yaml _original_basename=.kz7o9y6f follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:51 np0005532761 python3.9[65321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:52 np0005532761 python3.9[65446]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930211.3770378-989-160629405861669/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:53 np0005532761 python3.9[65602]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:36:54 np0005532761 python3.9[65755]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:36:55 np0005532761 python3[65908]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:36:56 np0005532761 python3.9[66062]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:56 np0005532761 python3.9[66185]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930215.6424758-1106-273476018431680/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:57 np0005532761 python3.9[66337]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:58 np0005532761 python3.9[66460]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930217.1127574-1151-266681943550337/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:59 np0005532761 python3.9[66612]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:59 np0005532761 python3.9[66735]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930218.6998155-1196-153022907508883/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:00 np0005532761 python3.9[66887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:37:01 np0005532761 python3.9[67010]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930220.3623416-1241-50856075969102/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:02 np0005532761 python3.9[67162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:37:03 np0005532761 python3.9[67285]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930221.9914196-1286-37772807544278/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:04 np0005532761 python3.9[67437]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:05 np0005532761 python3.9[67589]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:06 np0005532761 python3.9[67748]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
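
[annotation] The run from 15:36:56 to 15:37:06 assembles the EDPM firewall as fragments under /etc/nftables/ (chains, flushes, rules, update-jumps, jumps), dry-runs their concatenation with `nft -c -f -`, and only then persists include lines (iptables.nft, chains, rules, jumps) into /etc/sysconfig/nftables.conf, itself guarded by the blockinfile validate=`nft -c -f %s`. The check-before-commit shape in miniature:

    import pathlib
    import subprocess

    NFT = "/etc/nftables"
    VALIDATE = [f"{NFT}/edpm-chains.nft", f"{NFT}/edpm-flushes.nft",
                f"{NFT}/edpm-rules.nft", f"{NFT}/edpm-update-jumps.nft",
                f"{NFT}/edpm-jumps.nft"]
    PERSIST = [f"{NFT}/iptables.nft", f"{NFT}/edpm-chains.nft",
               f"{NFT}/edpm-rules.nft", f"{NFT}/edpm-jumps.nft"]

    # Dry run: nft -c parses and type-checks without touching the kernel.
    ruleset = "".join(pathlib.Path(f).read_text() for f in VALIDATE)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)

    # Persist boot-time load order; nftables.service reads this file.
    conf = pathlib.Path("/etc/sysconfig/nftables.conf")
    block = "".join(f'include "{f}"\n' for f in PERSIST)
    conf.write_text(conf.read_text() + block)
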
Nov 23 15:37:07 np0005532761 python3.9[67901]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:08 np0005532761 python3.9[68054]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:09 np0005532761 python3.9[68206]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:37:10 np0005532761 python3.9[68361]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
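
[annotation] The two mount tasks give the node dedicated hugetlbfs mountpoints at 1G and 2M page sizes; state=mounted also persists them to /etc/fstab. The equivalent mount calls (fstab persistence omitted):

    import subprocess

    # hugetlbfs with an explicit pagesize per mountpoint, matching the
    # ansible.posix.mount invocations above.
    for path, size in (("/dev/hugepages1G", "1G"), ("/dev/hugepages2M", "2M")):
        subprocess.run(
            ["mount", "-t", "hugetlbfs", "-o", f"pagesize={size}", "none", path],
            check=True)
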
Nov 23 15:37:10 np0005532761 systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 15:37:10 np0005532761 systemd[1]: session-14.scope: Consumed 32.836s CPU time.
Nov 23 15:37:10 np0005532761 systemd-logind[820]: Session 14 logged out. Waiting for processes to exit.
Nov 23 15:37:10 np0005532761 systemd-logind[820]: Removed session 14.
Nov 23 15:37:15 np0005532761 systemd-logind[820]: New session 15 of user zuul.
Nov 23 15:37:15 np0005532761 systemd[1]: Started Session 15 of User zuul.
Nov 23 15:37:16 np0005532761 python3.9[68544]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 15:37:17 np0005532761 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:37:17 np0005532761 python3.9[68698]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:18 np0005532761 python3.9[68850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:19 np0005532761 python3.9[69002]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=#012 create=True mode=0644 path=/tmp/ansible.3_xe85o2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:20 np0005532761 python3.9[69154]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3_xe85o2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:21 np0005532761 python3.9[69308]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3_xe85o2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
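The four tasks above are the usual edpm host-key distribution pattern: gather the ssh_host_key_* facts from every node, render all keys into a temporary file with blockinfile (the #012 sequences are syslog's escape for embedded newlines), overwrite /etc/ssh/ssh_known_hosts from that file, then delete it. A minimal Python sketch of the copy-and-clean-up steps, assuming the block file already exists (hypothetical helper, not the module source; the temp path is the one from the log):

    import os
    import shutil

    TMP_BLOCK = "/tmp/ansible.3_xe85o2"        # temp file written by blockinfile
    KNOWN_HOSTS = "/etc/ssh/ssh_known_hosts"

    # Equivalent of `cat /tmp/ansible.3_xe85o2 > /etc/ssh/ssh_known_hosts`
    shutil.copyfile(TMP_BLOCK, KNOWN_HOSTS)
    # Equivalent of the follow-up file task with state=absent
    os.remove(TMP_BLOCK)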
Nov 23 15:37:22 np0005532761 systemd[1]: session-15.scope: Deactivated successfully.
Nov 23 15:37:22 np0005532761 systemd[1]: session-15.scope: Consumed 3.771s CPU time.
Nov 23 15:37:22 np0005532761 systemd-logind[820]: Session 15 logged out. Waiting for processes to exit.
Nov 23 15:37:22 np0005532761 systemd-logind[820]: Removed session 15.
Nov 23 15:37:27 np0005532761 systemd-logind[820]: New session 16 of user zuul.
Nov 23 15:37:27 np0005532761 systemd[1]: Started Session 16 of User zuul.
Nov 23 15:37:28 np0005532761 python3.9[69486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:30 np0005532761 python3.9[69642]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:37:31 np0005532761 python3.9[69797]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
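The two systemd tasks above are the standard enable-then-start pair for sshd. Outside Ansible the same effect comes from systemctl (sketch; the module actually talks to systemd over D-Bus rather than shelling out):

    import subprocess

    # `enabled=True` from the first task, `state=started` from the second.
    subprocess.run(["systemctl", "enable", "sshd"], check=True)
    subprocess.run(["systemctl", "start", "sshd"], check=True)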
Nov 23 15:37:32 np0005532761 python3.9[69951]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:33 np0005532761 python3.9[70106]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:34 np0005532761 python3.9[70260]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:35 np0005532761 python3.9[70415]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
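The four tasks above are the edpm firewall apply sequence: load the chain definitions from edpm-chains.nft, check for an edpm-rules.nft.changed marker, feed flushes + rules + jump updates to nft as a single stdin transaction (hence `set -o pipefail` in the logged command), and clear the marker. A Python sketch of the same pipeline, using the paths from the log:

    import subprocess
    from pathlib import Path

    files = ["/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft"]

    # Concatenate the three files and let nft read them from stdin as one
    # atomic transaction, exactly what `cat ... | nft -f -` does.
    ruleset = b"".join(Path(f).read_bytes() for f in files)
    subprocess.run(["nft", "-f", "-"], input=ruleset, check=True)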
Nov 23 15:37:35 np0005532761 systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 15:37:35 np0005532761 systemd[1]: session-16.scope: Consumed 4.314s CPU time.
Nov 23 15:37:35 np0005532761 systemd-logind[820]: Session 16 logged out. Waiting for processes to exit.
Nov 23 15:37:35 np0005532761 systemd-logind[820]: Removed session 16.
Nov 23 15:37:41 np0005532761 systemd-logind[820]: New session 17 of user zuul.
Nov 23 15:37:41 np0005532761 systemd[1]: Started Session 17 of User zuul.
Nov 23 15:37:42 np0005532761 python3.9[70593]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:43 np0005532761 python3.9[70749]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:37:44 np0005532761 python3.9[70833]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:37:46 np0005532761 python3.9[70984]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:47 np0005532761 python3.9[71136]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
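yum-utils is installed here just for needs-restarting, and `needs-restarting -r` reports a pending reboot through its exit status (0: not needed, 1: needed), which the play then combines with any flag files found under /var/lib/openstack/reboot_required/. A short sketch of reading that status:

    import subprocess

    # needs-restarting -r exits 0 when no reboot is needed and 1 when core
    # packages (kernel, glibc, systemd, ...) were updated since boot.
    rc = subprocess.run(["needs-restarting", "-r"]).returncode
    print("reboot required" if rc == 1 else "no reboot needed")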
Nov 23 15:37:48 np0005532761 python3.9[71286]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:48 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:37:49 np0005532761 python3.9[71437]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:49 np0005532761 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 15:37:49 np0005532761 systemd[1]: session-17.scope: Consumed 5.670s CPU time.
Nov 23 15:37:49 np0005532761 systemd-logind[820]: Session 17 logged out. Waiting for processes to exit.
Nov 23 15:37:49 np0005532761 systemd-logind[820]: Removed session 17.
Nov 23 15:37:58 np0005532761 systemd-logind[820]: New session 18 of user zuul.
Nov 23 15:37:58 np0005532761 systemd[1]: Started Session 18 of User zuul.
Nov 23 15:38:04 np0005532761 chronyd[58658]: Selected source 174.138.193.90 (pool.ntp.org)
Nov 23 15:38:04 np0005532761 python3[72211]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:38:06 np0005532761 python3[72306]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 15:38:08 np0005532761 python3[72333]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:08 np0005532761 python3[72359]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:38:08 np0005532761 kernel: loop: module loaded
Nov 23 15:38:08 np0005532761 kernel: loop3: detected capacity change from 0 to 41943040
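The #012 sequences in the command task above are syslog's newline escape, so three commands actually ran: dd with count=0 seek=20G to create a sparse 20 GiB backing file, losetup to attach it, and lsblk for confirmation. The kernel line agrees: 41943040 sectors x 512 bytes = 20 GiB. Replayed from Python (sketch; same paths as the log):

    import subprocess

    # count=0 + seek=20G writes no data; it just extends the file to 20 GiB,
    # producing a sparse image for the throwaway Ceph OSD.
    subprocess.run(["dd", "if=/dev/zero", "of=/var/lib/ceph-osd-0.img",
                    "bs=1", "count=0", "seek=20G"], check=True)
    subprocess.run(["losetup", "/dev/loop3", "/var/lib/ceph-osd-0.img"],
                   check=True)
    subprocess.run(["lsblk"], check=True)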
Nov 23 15:38:09 np0005532761 python3[72394]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:38:09 np0005532761 lvm[72397]: PV /dev/loop3 not used.
Nov 23 15:38:09 np0005532761 lvm[72406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:38:09 np0005532761 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 23 15:38:09 np0005532761 lvm[72408]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 23 15:38:09 np0005532761 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
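Decoding the #012 escapes again, the task above builds the LVM stack on the loop device: pvcreate, vgcreate ceph_vg0, one LV spanning all free extents, then lvs. The lvm/systemd lines that follow are event-driven autoactivation firing as soon as the VG is complete. Sketch of the same sequence:

    import subprocess

    for cmd in (["pvcreate", "/dev/loop3"],
                ["vgcreate", "ceph_vg0", "/dev/loop3"],
                ["lvcreate", "-n", "ceph_lv0", "-l", "+100%FREE", "ceph_vg0"],
                ["lvs"]):
        subprocess.run(cmd, check=True)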
Nov 23 15:38:10 np0005532761 python3[72486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:38:10 np0005532761 python3[72559]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930289.8976786-36960-121342317491809/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:11 np0005532761 python3[72609]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:38:11 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:11 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:11 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:11 np0005532761 systemd[1]: Starting Ceph OSD losetup...
Nov 23 15:38:11 np0005532761 bash[72649]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 23 15:38:11 np0005532761 systemd[1]: Finished Ceph OSD losetup.
Nov 23 15:38:11 np0005532761 lvm[72650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:38:11 np0005532761 lvm[72650]: VG ceph_vg0 finished
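The template copy and systemd task above install and start ceph-osd-losetup-0.service; the bash[72649] line is losetup reporting that /dev/loop3 is bound to the image. The unit body itself is hidden (content=NOT_LOGGING_PARAMETER), but it most plausibly re-attaches the loop device at boot, since loop mappings do not survive reboots. A hypothetical equivalent of such a unit's start logic (an assumption, not the actual template):

    import subprocess

    IMG = "/var/lib/ceph-osd-0.img"
    DEV = "/dev/loop3"

    # `losetup -j IMG` prints the devices currently backed by IMG; empty
    # output means nothing is attached yet, so bind DEV to IMG.
    out = subprocess.run(["losetup", "-j", IMG], capture_output=True,
                         text=True, check=True).stdout
    if not out.strip():
        subprocess.run(["losetup", DEV, IMG], check=True)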
Nov 23 15:38:14 np0005532761 python3[72674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:38:16 np0005532761 python3[72769]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 15:38:19 np0005532761 python3[72829]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 15:38:21 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:38:21 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:38:22 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:38:22 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:38:22 np0005532761 systemd[1]: run-r34b6a85caf9643b5956d9059dddad9b2.service: Deactivated successfully.
Nov 23 15:38:22 np0005532761 python3[72944]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:22 np0005532761 python3[72972]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:38:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:23 np0005532761 python3[73035]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:23 np0005532761 python3[73061]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:24 np0005532761 python3[73139]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:38:24 np0005532761 python3[73212]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930304.1685472-37154-209555023047018/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:25 np0005532761 python3[73314]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:38:25 np0005532761 python3[73387]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930305.1888928-37172-26083950734409/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:26 np0005532761 python3[73437]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:26 np0005532761 python3[73465]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:26 np0005532761 python3[73493]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:27 np0005532761 python3[73521]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
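After staging the spec file, the assimilate conf, and the ceph-admin SSH keypair, the task above launches the bootstrap. With the trailing #012 newline removed and the stray backslash before --skip-monitoring-stack dropped (it comes from a line continuation in the playbook template and is harmless, since the shell just escapes the leading dash), the decoded invocation is:

    import subprocess

    # cephadm bootstrap exactly as logged in the _raw_params above.
    subprocess.run([
        "/usr/sbin/cephadm", "bootstrap",
        "--skip-firewalld",
        "--ssh-private-key", "/home/ceph-admin/.ssh/id_rsa",
        "--ssh-public-key", "/home/ceph-admin/.ssh/id_rsa.pub",
        "--ssh-user", "ceph-admin",
        "--allow-fqdn-hostname",
        "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
        "--output-config", "/etc/ceph/ceph.conf",
        "--fsid", "03808be8-ae4a-5548-82e6-4a294f1bc627",
        "--config", "/home/ceph-admin/assimilate_ceph.conf",
        "--skip-monitoring-stack", "--skip-dashboard",
        "--mon-ip", "192.168.122.100",
    ], check=True)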
Nov 23 15:38:27 np0005532761 systemd-logind[820]: New session 19 of user ceph-admin.
Nov 23 15:38:27 np0005532761 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 15:38:27 np0005532761 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 15:38:27 np0005532761 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 15:38:27 np0005532761 systemd[1]: Starting User Manager for UID 42477...
Nov 23 15:38:27 np0005532761 systemd[73529]: Queued start job for default target Main User Target.
Nov 23 15:38:27 np0005532761 systemd[73529]: Created slice User Application Slice.
Nov 23 15:38:27 np0005532761 systemd[73529]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:38:27 np0005532761 systemd[73529]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:38:27 np0005532761 systemd[73529]: Reached target Paths.
Nov 23 15:38:27 np0005532761 systemd[73529]: Reached target Timers.
Nov 23 15:38:27 np0005532761 systemd[73529]: Starting D-Bus User Message Bus Socket...
Nov 23 15:38:27 np0005532761 systemd[73529]: Starting Create User's Volatile Files and Directories...
Nov 23 15:38:27 np0005532761 systemd[73529]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:38:27 np0005532761 systemd[73529]: Reached target Sockets.
Nov 23 15:38:27 np0005532761 systemd[73529]: Finished Create User's Volatile Files and Directories.
Nov 23 15:38:27 np0005532761 systemd[73529]: Reached target Basic System.
Nov 23 15:38:27 np0005532761 systemd[73529]: Reached target Main User Target.
Nov 23 15:38:27 np0005532761 systemd[73529]: Startup finished in 106ms.
Nov 23 15:38:27 np0005532761 systemd[1]: Started User Manager for UID 42477.
Nov 23 15:38:27 np0005532761 systemd[1]: Started Session 19 of User ceph-admin.
Nov 23 15:38:27 np0005532761 systemd-logind[820]: Session 19 logged out. Waiting for processes to exit.
Nov 23 15:38:27 np0005532761 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 15:38:27 np0005532761 systemd-logind[820]: Removed session 19.
Nov 23 15:38:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-compat2095534144-lower\x2dmapped.mount: Deactivated successfully.
Nov 23 15:38:37 np0005532761 systemd[1]: Stopping User Manager for UID 42477...
Nov 23 15:38:37 np0005532761 systemd[73529]: Activating special unit Exit the Session...
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped target Main User Target.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped target Basic System.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped target Paths.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped target Sockets.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped target Timers.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 15:38:37 np0005532761 systemd[73529]: Closed D-Bus User Message Bus Socket.
Nov 23 15:38:37 np0005532761 systemd[73529]: Stopped Create User's Volatile Files and Directories.
Nov 23 15:38:37 np0005532761 systemd[73529]: Removed slice User Application Slice.
Nov 23 15:38:37 np0005532761 systemd[73529]: Reached target Shutdown.
Nov 23 15:38:37 np0005532761 systemd[73529]: Finished Exit the Session.
Nov 23 15:38:37 np0005532761 systemd[73529]: Reached target Exit the Session.
Nov 23 15:38:37 np0005532761 systemd[1]: user@42477.service: Deactivated successfully.
Nov 23 15:38:37 np0005532761 systemd[1]: Stopped User Manager for UID 42477.
Nov 23 15:38:37 np0005532761 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 23 15:38:37 np0005532761 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 23 15:38:37 np0005532761 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 23 15:38:37 np0005532761 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 23 15:38:37 np0005532761 systemd[1]: Removed slice User Slice of UID 42477.
Nov 23 15:38:53 np0005532761 podman[73623]: 2025-11-23 20:38:53.075646901 +0000 UTC m=+25.172573425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.15137935 +0000 UTC m=+0.053812976 container create c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:38:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1429155471-merged.mount: Deactivated successfully.
Nov 23 15:38:53 np0005532761 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 23 15:38:53 np0005532761 systemd[1]: Started libpod-conmon-c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066.scope.
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.123210259 +0000 UTC m=+0.025643975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.259909032 +0000 UTC m=+0.162342678 container init c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.267167065 +0000 UTC m=+0.169600691 container start c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.270727961 +0000 UTC m=+0.173161587 container attach c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:38:53 np0005532761 happy_aryabhata[73705]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066.scope: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.365705522 +0000 UTC m=+0.268139148 container died c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:38:53 np0005532761 podman[73689]: 2025-11-23 20:38:53.403964612 +0000 UTC m=+0.306398238 container remove c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066 (image=quay.io/ceph/ceph:v19, name=happy_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-conmon-c3b384b883b6ac18bef804968993acb546c85d971c519a2e10d6a3dc1a81d066.scope: Deactivated successfully.
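The create/init/start/attach/died/remove sequence above is cephadm probing the freshly pulled image with a throwaway container; this first probe only prints the Ceph version (the happy_aryabhata line). The equivalent manual probe would be (sketch; cephadm constructs the podman command itself, with an explicit entrypoint):

    import subprocess

    # One-shot container: run `ceph --version` in the image, then discard it.
    subprocess.run(["podman", "run", "--rm",
                    "--entrypoint", "ceph",
                    "quay.io/ceph/ceph:v19", "--version"], check=True)
    # Prints: ceph version 19.2.3 (...) squid (stable)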
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.484799477 +0000 UTC m=+0.049577064 container create f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:53 np0005532761 systemd[1]: Started libpod-conmon-f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1.scope.
Nov 23 15:38:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.550447396 +0000 UTC m=+0.115225003 container init f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.556189939 +0000 UTC m=+0.120967516 container start f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:38:53 np0005532761 objective_mendel[73739]: 167 167
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1.scope: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.466996162 +0000 UTC m=+0.031773759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.562513367 +0000 UTC m=+0.127290944 container attach f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.562783625 +0000 UTC m=+0.127561212 container died f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:38:53 np0005532761 podman[73723]: 2025-11-23 20:38:53.597294195 +0000 UTC m=+0.162071772 container remove f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1 (image=quay.io/ceph/ceph:v19, name=objective_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-conmon-f84af0fb3602e9917d6fcdf541609d00804cb4f73510a25b81d587a1974f7fb1.scope: Deactivated successfully.
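The second probe (objective_mendel) prints "167 167": the numeric uid and gid of the ceph user inside the image, which cephadm needs in order to chown data and log directories on the host. Assuming cephadm's usual stat-based lookup, the equivalent is:

    import subprocess

    # stat on /var/lib/ceph inside the image yields the ceph uid/gid pair.
    out = subprocess.run(["podman", "run", "--rm",
                          "--entrypoint", "stat",
                          "quay.io/ceph/ceph:v19",
                          "-c", "%u %g", "/var/lib/ceph"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected: 167 167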
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.659654297 +0000 UTC m=+0.041304552 container create 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:38:53 np0005532761 systemd[1]: Started libpod-conmon-04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741.scope.
Nov 23 15:38:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.715536087 +0000 UTC m=+0.097186382 container init 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.720093938 +0000 UTC m=+0.101744203 container start 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.724038203 +0000 UTC m=+0.105688448 container attach 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.640740313 +0000 UTC m=+0.022390618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:53 np0005532761 heuristic_ritchie[73773]: AQDdcCNpuxzwKxAAqgeGti6ILsDjgI5PU2ZeZw==
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741.scope: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.740270435 +0000 UTC m=+0.121920680 container died 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:38:53 np0005532761 podman[73756]: 2025-11-23 20:38:53.76894159 +0000 UTC m=+0.150591835 container remove 04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741 (image=quay.io/ceph/ceph:v19, name=heuristic_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-conmon-04be3b555d8ebe9b9951c83e146b6e3a0ba6e440d98d876f388504e718813741.scope: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.838642938 +0000 UTC m=+0.049503661 container create ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:53 np0005532761 systemd[1]: Started libpod-conmon-ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9.scope.
Nov 23 15:38:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.894721963 +0000 UTC m=+0.105582696 container init ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.899641103 +0000 UTC m=+0.110501826 container start ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.903601399 +0000 UTC m=+0.114462142 container attach ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.815568123 +0000 UTC m=+0.026428896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:53 np0005532761 fervent_joliot[73808]: AQDdcCNp/A6xNhAAmpvon0k7W03iHGez5e5cHA==
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9.scope: Deactivated successfully.
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.921192399 +0000 UTC m=+0.132053122 container died ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:53 np0005532761 podman[73791]: 2025-11-23 20:38:53.965830727 +0000 UTC m=+0.176691450 container remove ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9 (image=quay.io/ceph/ceph:v19, name=fervent_joliot, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:38:53 np0005532761 systemd[1]: libpod-conmon-ed78439994a58344ef50030ce8bed8239b062ae2bd1c3a44db444ef56765d6b9.scope: Deactivated successfully.
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.032921216 +0000 UTC m=+0.045439392 container create 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 15:38:54 np0005532761 systemd[1]: Started libpod-conmon-94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be.scope.
Nov 23 15:38:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-22138ebd00599ae43178309ef16e0ee1f347e1eaf49ca9f168ac10bb11969636-merged.mount: Deactivated successfully.
Nov 23 15:38:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.012499212 +0000 UTC m=+0.025017408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.109434136 +0000 UTC m=+0.121952342 container init 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.116557645 +0000 UTC m=+0.129075821 container start 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.12010196 +0000 UTC m=+0.132620166 container attach 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 15:38:54 np0005532761 modest_bouman[73846]: AQDecCNpy0WLCBAAucBJXmJUoYXL7S0qRC9iWw==
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be.scope: Deactivated successfully.
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.148393164 +0000 UTC m=+0.160911340 container died 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:38:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-64531591deb262b9b467417abc28bbc5fca924a3a3ef0f6391723ac63e0b1e69-merged.mount: Deactivated successfully.
Nov 23 15:38:54 np0005532761 podman[73828]: 2025-11-23 20:38:54.192431127 +0000 UTC m=+0.204949313 container remove 94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be (image=quay.io/ceph/ceph:v19, name=modest_bouman, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:38:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-conmon-94d26ab52b667ba26a154ed107332727802f7fe9e99c182a836eaeed7909a8be.scope: Deactivated successfully.
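The next three one-shot containers (heuristic_ritchie, fervent_joliot, modest_bouman) each print a fresh cephx secret, the AQ... base64 blobs, which cephadm captures from stdout to assemble the initial mon and client.admin keyrings. These are live secrets, so a log like this should be scrubbed before sharing. The generator behind output in that format is ceph-authtool (sketch; the exact wrapper cephadm uses is not shown in the log):

    import subprocess

    # Each run emits one new cephx key in the AQ... format seen above.
    key = subprocess.run(["podman", "run", "--rm",
                          "--entrypoint", "ceph-authtool",
                          "quay.io/ceph/ceph:v19", "--gen-print-key"],
                         capture_output=True, text=True, check=True).stdout
    print(key.strip())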
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.277539086 +0000 UTC m=+0.054793741 container create 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:38:54 np0005532761 systemd[1]: Started libpod-conmon-0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2.scope.
Nov 23 15:38:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1be5c6d451224818fe811a8ba167fd57906f28a84db1cee5eccf0e85a76ebfb/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.341277816 +0000 UTC m=+0.118532561 container init 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.254871082 +0000 UTC m=+0.032125797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.346994498 +0000 UTC m=+0.124249173 container start 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.350652055 +0000 UTC m=+0.127906750 container attach 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:38:54 np0005532761 relaxed_brown[73881]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 23 15:38:54 np0005532761 relaxed_brown[73881]: setting min_mon_release = quincy
Nov 23 15:38:54 np0005532761 relaxed_brown[73881]: /usr/bin/monmaptool: set fsid to 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:54 np0005532761 relaxed_brown[73881]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
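The relaxed_brown container above seeds the initial monmap: one monitor, the cluster fsid, and min_mon_release quincy. A minimal sketch of an equivalent invocation, assuming monmaptool's documented --create/--fsid/--addv/--set-min-mon-release flags and the monitor address that appears later in this log; it would need to run inside the ceph container, not on the bare host:

    import subprocess

    # Hypothetical reconstruction of the monmap-seeding step logged above.
    # fsid and address are taken from this log; flag spelling follows
    # monmaptool's documented interface and should be checked with --help.
    FSID = "03808be8-ae4a-5548-82e6-4a294f1bc627"
    ADDR = "192.168.122.100"

    subprocess.run(
        [
            "monmaptool", "--create",
            "--fsid", FSID,
            "--addv", "compute-0", f"[v2:{ADDR}:3300,v1:{ADDR}:6789]",
            "--set-min-mon-release", "quincy",
            "/tmp/monmap",
        ],
        check=True,
    )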
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2.scope: Deactivated successfully.
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.375704973 +0000 UTC m=+0.152959668 container died 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 23 15:38:54 np0005532761 podman[73865]: 2025-11-23 20:38:54.421518514 +0000 UTC m=+0.198773179 container remove 0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2 (image=quay.io/ceph/ceph:v19, name=relaxed_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-conmon-0d498badcacb02a2007b159535f28d9bc021e121063acafa76e928539f3ca1c2.scope: Deactivated successfully.
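The image pull/create/init/start/attach/died/remove events above are the normal journald trace of a short-lived container. A sketch that reproduces the same event sequence with the image from this log (the echo payload is illustrative only):

    import subprocess

    # A `podman run --rm` like this emits the same pull, create, init,
    # start, attach, died and remove events journald recorded above.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "echo", "done"],
        check=True,
    )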
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.496553184 +0000 UTC m=+0.047729534 container create aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:38:54 np0005532761 systemd[1]: Started libpod-conmon-aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b.scope.
Nov 23 15:38:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fa4da38e39e2e81af0cf087e52684740d20162801219b8adf6f014dc4995fd/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fa4da38e39e2e81af0cf087e52684740d20162801219b8adf6f014dc4995fd/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.472545294 +0000 UTC m=+0.023721624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fa4da38e39e2e81af0cf087e52684740d20162801219b8adf6f014dc4995fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fa4da38e39e2e81af0cf087e52684740d20162801219b8adf6f014dc4995fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.586271846 +0000 UTC m=+0.137448246 container init aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.592306886 +0000 UTC m=+0.143483206 container start aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.596123708 +0000 UTC m=+0.147300018 container attach aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b.scope: Deactivated successfully.
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.68061476 +0000 UTC m=+0.231791120 container died aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:54 np0005532761 podman[73900]: 2025-11-23 20:38:54.727675505 +0000 UTC m=+0.278851815 container remove aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b (image=quay.io/ceph/ceph:v19, name=condescending_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:38:54 np0005532761 systemd[1]: libpod-conmon-aa978800eaac57a920943e23dc350246f7c009c889a5ebe28814ba43db1f253b.scope: Deactivated successfully.
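The bind mounts logged for condescending_kalam (/tmp/monmap, /tmp/keyring, the mon data directory) match a monitor-store initialization, and the later "mkfs 03808be8-…" line from ceph-mon is consistent with that. An assumed equivalent using ceph-mon's standard mkfs flags:

    import subprocess

    # Assumed reconstruction of the mon-store initialization this helper
    # container performed; paths are the bind mounts visible in the xfs
    # remount messages above.
    subprocess.run(
        [
            "ceph-mon", "--mkfs",
            "-i", "compute-0",
            "--monmap", "/tmp/monmap",
            "--keyring", "/tmp/keyring",
        ],
        check=True,
    )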
Nov 23 15:38:54 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:54 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:54 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:55 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:55 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:55 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:55 np0005532761 systemd[1]: Reached target All Ceph clusters and services.
Nov 23 15:38:55 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:55 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:55 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:55 np0005532761 systemd[1]: Reached target Ceph cluster 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:38:55 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:55 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:55 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:55 np0005532761 systemd[1]: Reloading.
Nov 23 15:38:55 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:55 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:55 np0005532761 systemd[1]: Created slice Slice /system/ceph-03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:38:55 np0005532761 systemd[1]: Reached target System Time Set.
Nov 23 15:38:55 np0005532761 systemd[1]: Reached target System Time Synchronized.
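The reload bursts above correspond to per-cluster systemd units, a slice, and targets being written out; everything generated is named after the fsid, so a unit glob shows what appeared. A small sketch:

    import subprocess

    # Enumerate the per-cluster units cephadm just generated; the fsid
    # comes from the target and slice names in the log above.
    FSID = "03808be8-ae4a-5548-82e6-4a294f1bc627"
    subprocess.run(
        ["systemctl", "list-units", "--all", "--no-pager", f"ceph-{FSID}*"],
        check=True,
    )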
Nov 23 15:38:56 np0005532761 systemd[1]: Starting Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:38:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:56 np0005532761 podman[74194]: 2025-11-23 20:38:56.218282375 +0000 UTC m=+0.037335206 container create 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85 (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f865f33c5993e533ac7bd86403a22a96dbdc9d08e1b4e2776d0e3662ca64ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f865f33c5993e533ac7bd86403a22a96dbdc9d08e1b4e2776d0e3662ca64ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f865f33c5993e533ac7bd86403a22a96dbdc9d08e1b4e2776d0e3662ca64ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f865f33c5993e533ac7bd86403a22a96dbdc9d08e1b4e2776d0e3662ca64ea/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 podman[74194]: 2025-11-23 20:38:56.27962824 +0000 UTC m=+0.098681071 container init 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85 (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:38:56 np0005532761 podman[74194]: 2025-11-23 20:38:56.289916514 +0000 UTC m=+0.108969375 container start 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85 (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Nov 23 15:38:56 np0005532761 bash[74194]: 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85
Nov 23 15:38:56 np0005532761 podman[74194]: 2025-11-23 20:38:56.202394272 +0000 UTC m=+0.021447103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:56 np0005532761 systemd[1]: Started Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
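The unit just started follows the ceph-<fsid>@<type>.<id>.service naming cephadm uses for daemon units; assuming that convention, its journal can be tailed directly:

    import subprocess

    # Unit name reconstructed from the "Started Ceph mon.compute-0 ..."
    # line above under cephadm's assumed naming scheme.
    UNIT = "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-0.service"
    subprocess.run(
        ["journalctl", "-u", UNIT, "-n", "20", "--no-pager"],
        check=True,
    )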
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: pidfile_write: ignore empty --pid-file
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: load: jerasure load: lrc 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Git sha 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: DB SUMMARY
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: DB Session ID:  NN0G1O8S1N6GH5MU8835
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                                     Options.env: 0x5589bf063c20
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                                Options.info_log: 0x5589c092cd60
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                                 Options.wal_dir: 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                    Options.write_buffer_manager: 0x5589c0931900
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                               Options.row_cache: None
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                              Options.wal_filter: None
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.wal_compression: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.max_background_jobs: 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Compression algorithms supported:
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kZSTD supported: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kXpressCompression supported: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kBZip2Compression supported: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kLZ4Compression supported: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kZlibCompression supported: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kLZ4HCCompression supported: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     kSnappyCompression supported: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:           Options.merge_operator: 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:        Options.compaction_filter: None
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5589c092c500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5589c0951350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.compression: NoCompression
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.num_levels: 7
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 856a136e-5a38-4ae3-9b7b-c6eb86cfb78d
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930336331190, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930336333212, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "NN0G1O8S1N6GH5MU8835", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930336333307, "job": 1, "event": "recovery_finished"}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5589c0952e00
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: DB pointer 0x5589c0a5c000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5589c0951350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
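Each rocksdb EVENT_LOG_v1 record above embeds a JSON object after the marker. A small filter for pulling those events out of a journal dump piped on stdin:

    import json
    import re
    import sys

    # Extract the JSON payload of EVENT_LOG_v1 lines from journalctl
    # output; the sample records above all carry time_micros and event.
    MARK = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    for line in sys.stdin:
        m = MARK.search(line)
        if m:
            event = json.loads(m.group(1))
            print(event["time_micros"], event["event"])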
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
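Per the line above, the monitor binds msgr2 on 3300 and legacy msgr1 on 6789. A quick stdlib reachability probe for both endpoints:

    import socket

    # Probe the two messenger ports from the bind addrs logged above.
    for port in (3300, 6789):
        with socket.create_connection(("192.168.122.100", port), timeout=2):
            print(f"mon port {port} is accepting connections")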
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@-1(???) e0 preinit fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
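The backfillfull/full/nearfull values seeded into the pending osdmap above are the stock thresholds; they can be retuned later with the standard commands. A sketch that simply re-applies the logged values:

    import subprocess

    # Restate the thresholds ceph-mon seeded above; the same commands
    # accept different values when retuning a live cluster.
    for cmd in (
        ["ceph", "osd", "set-nearfull-ratio", "0.85"],
        ["ceph", "osd", "set-backfillfull-ratio", "0.9"],
        ["ceph", "osd", "set-full-ratio", "0.95"],
    ):
        subprocess.run(cmd, check=True)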
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T20:38:54.371685+0000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : created 2025-11-23T20:38:54.371685+0000
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
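The [DBG] monmap block above (epoch, fsid, min_mon_release, election strategy, addresses) is the same data `ceph mon dump` reports on demand:

    import subprocess

    # Fetch the current monmap; fields mirror the [DBG] lines above.
    subprocess.run(
        ["ceph", "mon", "dump", "--format", "json-pretty"],
        check=True,
    )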
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).mds e1 new map
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-11-23T20:38:56:367641+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : fsmap 
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mkfs 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
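With a single monitor, quorum is just rank 0; `ceph quorum_status` returns the leader and member list that the cluster-log lines above record (field names per its JSON output):

    import json
    import subprocess

    # Query quorum state; for this bootstrap it should name compute-0
    # as leader with itself as the only member.
    out = subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status["quorum_leader_name"], status["quorum_names"])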
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.396950748 +0000 UTC m=+0.055954853 container create e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:56 np0005532761 systemd[1]: Started libpod-conmon-e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063.scope.
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.375440284 +0000 UTC m=+0.034444409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5000162ce6e8750c6ec85393cd6af7c8d72f72ffca9d1171d23f81118dbb971/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5000162ce6e8750c6ec85393cd6af7c8d72f72ffca9d1171d23f81118dbb971/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5000162ce6e8750c6ec85393cd6af7c8d72f72ffca9d1171d23f81118dbb971/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.507538105 +0000 UTC m=+0.166542230 container init e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.515916549 +0000 UTC m=+0.174920654 container start e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.519725689 +0000 UTC m=+0.178729814 container attach e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 15:38:56 np0005532761 ceph-mon[74213]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272660486' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:  cluster:
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    id:     03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    health: HEALTH_OK
Nov 23 15:38:56 np0005532761 distracted_allen[74268]: 
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:  services:
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    mon: 1 daemons, quorum compute-0 (age 0.379911s)
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    mgr: no daemons active
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    osd: 0 osds: 0 up, 0 in
Nov 23 15:38:56 np0005532761 distracted_allen[74268]: 
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:  data:
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    pools:   0 pools, 0 pgs
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    objects: 0 objects, 0 B
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    usage:   0 B used, 0 B / 0 B avail
Nov 23 15:38:56 np0005532761 distracted_allen[74268]:    pgs:     
Nov 23 15:38:56 np0005532761 distracted_allen[74268]: 
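
The indented block above is the stdout of the short-lived helper container distracted_allen, which executed the equivalent of `ceph status` (the matching mon_command dispatch is logged just before it). Roughly the shape of such a helper run, inferred from the container create/start/attach events and the /etc/ceph bind mounts; the exact flags cephadm passes are an assumption:

    # Sketch only: podman flags are real, but cephadm's actual invocation
    # is not shown in this log.
    podman run --rm --net=host \
        -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z \
        -v /etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z \
        quay.io/ceph/ceph:v19 \
        ceph status
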
Nov 23 15:38:56 np0005532761 systemd[1]: libpod-e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063.scope: Deactivated successfully.
Nov 23 15:38:56 np0005532761 conmon[74268]: conmon e968e736cf96586acb89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063.scope/container/memory.events
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.765204433 +0000 UTC m=+0.424208578 container died e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:38:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d5000162ce6e8750c6ec85393cd6af7c8d72f72ffca9d1171d23f81118dbb971-merged.mount: Deactivated successfully.
Nov 23 15:38:56 np0005532761 podman[74215]: 2025-11-23 20:38:56.80749389 +0000 UTC m=+0.466498025 container remove e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063 (image=quay.io/ceph/ceph:v19, name=distracted_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:56 np0005532761 systemd[1]: libpod-conmon-e968e736cf96586acb89f619ce1b6c41284da9faa9b594b96e5c3816e6f60063.scope: Deactivated successfully.
Nov 23 15:38:56 np0005532761 podman[74305]: 2025-11-23 20:38:56.895863645 +0000 UTC m=+0.053936018 container create b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:38:56 np0005532761 systemd[1]: Started libpod-conmon-b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513.scope.
Nov 23 15:38:56 np0005532761 podman[74305]: 2025-11-23 20:38:56.866555374 +0000 UTC m=+0.024627797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ccfb2b4f911b8e31a29be5318e90147798fa590f288899ffe2e6a6e5a6afed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ccfb2b4f911b8e31a29be5318e90147798fa590f288899ffe2e6a6e5a6afed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ccfb2b4f911b8e31a29be5318e90147798fa590f288899ffe2e6a6e5a6afed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ccfb2b4f911b8e31a29be5318e90147798fa590f288899ffe2e6a6e5a6afed/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:57 np0005532761 podman[74305]: 2025-11-23 20:38:57.009398801 +0000 UTC m=+0.167471264 container init b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:38:57 np0005532761 podman[74305]: 2025-11-23 20:38:57.016182422 +0000 UTC m=+0.174254795 container start b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:38:57 np0005532761 podman[74305]: 2025-11-23 20:38:57.021044713 +0000 UTC m=+0.179117166 container attach b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/154113740' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/154113740' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 15:38:57 np0005532761 nervous_chandrasekhar[74321]: 
Nov 23 15:38:57 np0005532761 nervous_chandrasekhar[74321]: [global]
Nov 23 15:38:57 np0005532761 nervous_chandrasekhar[74321]: 	fsid = 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:57 np0005532761 nervous_chandrasekhar[74321]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
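
The [global] section above, printed by the helper container nervous_chandrasekhar, is the minimal bootstrap configuration: just the cluster fsid and the monitor addresses. The audit entries show it passing through `ceph config assimilate-conf`, which ingests options from a flat conf file into the monitors' central configuration database and emits whatever cannot be assimilated. A sketch of that step; -i/-o are real flags of this command, the paths are placeholders:

    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal
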
Nov 23 15:38:57 np0005532761 systemd[1]: libpod-b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513.scope: Deactivated successfully.
Nov 23 15:38:57 np0005532761 podman[74305]: 2025-11-23 20:38:57.221456174 +0000 UTC m=+0.379528587 container died b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:38:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d0ccfb2b4f911b8e31a29be5318e90147798fa590f288899ffe2e6a6e5a6afed-merged.mount: Deactivated successfully.
Nov 23 15:38:57 np0005532761 podman[74305]: 2025-11-23 20:38:57.267921562 +0000 UTC m=+0.425993945 container remove b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513 (image=quay.io/ceph/ceph:v19, name=nervous_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:57 np0005532761 systemd[1]: libpod-conmon-b60f17a6c36ef7829b17d41beb31a540d3ef705e567809d5b45203d931e79513.scope: Deactivated successfully.
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.356953745 +0000 UTC m=+0.064636444 container create 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:57 np0005532761 systemd[1]: Started libpod-conmon-9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c.scope.
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: from='client.? 192.168.122.100:0/154113740' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: from='client.? 192.168.122.100:0/154113740' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.318955362 +0000 UTC m=+0.026638131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:57 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bec26005f77473aac24340e8eadf3088ac7ddeb1e68962591d083d8b0ea221/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bec26005f77473aac24340e8eadf3088ac7ddeb1e68962591d083d8b0ea221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bec26005f77473aac24340e8eadf3088ac7ddeb1e68962591d083d8b0ea221/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bec26005f77473aac24340e8eadf3088ac7ddeb1e68962591d083d8b0ea221/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.44342829 +0000 UTC m=+0.151111049 container init 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.45241126 +0000 UTC m=+0.160093969 container start 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.457859765 +0000 UTC m=+0.165542475 container attach 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568576604' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:38:57 np0005532761 systemd[1]: libpod-9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c.scope: Deactivated successfully.
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.66366667 +0000 UTC m=+0.371349349 container died 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:38:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-11bec26005f77473aac24340e8eadf3088ac7ddeb1e68962591d083d8b0ea221-merged.mount: Deactivated successfully.
Nov 23 15:38:57 np0005532761 podman[74361]: 2025-11-23 20:38:57.710647153 +0000 UTC m=+0.418329842 container remove 9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c (image=quay.io/ceph/ceph:v19, name=youthful_ritchie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:57 np0005532761 systemd[1]: libpod-conmon-9556348415b633814d25d635769ac4db2fb4b4d4b9d302d4208bf477e180f12c.scope: Deactivated successfully.
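
This third helper container (youthful_ritchie) dispatched `config generate-minimal-conf`, the counterpart of assimilate-conf: it emits just enough configuration (fsid and mon_host, as in the [global] block earlier) for a client to reach the monitors. Illustrative usage with a placeholder output path:

    # generate-minimal-conf is a real subcommand that prints to stdout;
    # the destination file is an assumption.
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal
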
Nov 23 15:38:57 np0005532761 systemd[1]: Stopping Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: mon.compute-0@0(leader) e1 shutdown
Nov 23 15:38:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0[74209]: 2025-11-23T20:38:57.924+0000 7f1be5ce0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 23 15:38:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0[74209]: 2025-11-23T20:38:57.924+0000 7f1be5ce0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 15:38:57 np0005532761 ceph-mon[74213]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 15:38:58 np0005532761 podman[74448]: 2025-11-23 20:38:58.153403814 +0000 UTC m=+0.273020128 container died 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85 (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:38:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-74f865f33c5993e533ac7bd86403a22a96dbdc9d08e1b4e2776d0e3662ca64ea-merged.mount: Deactivated successfully.
Nov 23 15:38:58 np0005532761 podman[74448]: 2025-11-23 20:38:58.187409461 +0000 UTC m=+0.307025755 container remove 366a3f05cddfbf76bb7f47cf5bda89ae37abb680af9b17f858e0ed0db17f4d85 (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:58 np0005532761 bash[74448]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0
Nov 23 15:38:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:38:58 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-0.service: Deactivated successfully.
Nov 23 15:38:58 np0005532761 systemd[1]: Stopped Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:38:58 np0005532761 systemd[1]: Starting Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:38:58 np0005532761 podman[74551]: 2025-11-23 20:38:58.557880575 +0000 UTC m=+0.037756067 container create 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba872ed154a2371bdc63404a3274748bb602c9d12c4cee1424ae4196018e93e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba872ed154a2371bdc63404a3274748bb602c9d12c4cee1424ae4196018e93e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba872ed154a2371bdc63404a3274748bb602c9d12c4cee1424ae4196018e93e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba872ed154a2371bdc63404a3274748bb602c9d12c4cee1424ae4196018e93e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 podman[74551]: 2025-11-23 20:38:58.632146645 +0000 UTC m=+0.112022127 container init 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 15:38:58 np0005532761 podman[74551]: 2025-11-23 20:38:58.541476248 +0000 UTC m=+0.021351730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:58 np0005532761 podman[74551]: 2025-11-23 20:38:58.639779309 +0000 UTC m=+0.119654771 container start 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:38:58 np0005532761 bash[74551]: 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f
Nov 23 15:38:58 np0005532761 systemd[1]: Started Ceph mon.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
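
The Stopping/Stopped/Starting/Started sequence above is the cephadm-managed systemd unit restarting the monitor in a fresh container: ceph-mon PID 74213 receives SIGTERM, shuts down its RocksDB store, and PID 74569 takes over. The unit name comes from the log itself; these inspection commands are illustrative:

    systemctl status ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-0.service
    journalctl -u ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-0.service -n 50
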
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: pidfile_write: ignore empty --pid-file
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: load: jerasure load: lrc 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Git sha 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: DB SUMMARY
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: DB Session ID:  Q7AUUU8H5P8CM37LFPNC
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58735 ; 
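
On restart the monitor reopens its RocksDB store at /var/lib/ceph/mon/ceph-compute-0/store.db, finding one SST file (000008.sst) and a ~58 KB write-ahead log to replay. For offline inspection of such a store there is ceph-monstore-tool; the tool and these subcommands are real, but applying them to this containerized path, with the mon stopped first, is an assumption:

    # Must be run against a quiesced store (stop the mon unit first).
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 dump-keys
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 get monmap -- --out /tmp/monmap
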
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                                     Options.env: 0x55cf3daeec20
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                                Options.info_log: 0x55cf3f919ac0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                                 Options.wal_dir: 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                    Options.write_buffer_manager: 0x55cf3f91d900
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                               Options.row_cache: None
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                              Options.wal_filter: None
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.wal_compression: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.max_background_jobs: 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Compression algorithms supported:
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kZSTD supported: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kXpressCompression supported: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kZlibCompression supported: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:           Options.merge_operator: 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:        Options.compaction_filter: None
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf3f918aa0)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x55cf3f93d350
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.compression: NoCompression
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.num_levels: 7
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 856a136e-5a38-4ae3-9b7b-c6eb86cfb78d
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930338679678, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930338684711, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56960, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54477, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930338, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930338684933, "job": 1, "event": "recovery_finished"}
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cf3f93ee00
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: DB pointer 0x55cf3fa48000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf3f93d350#2 capacity: 512.00 MB usage: 1.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
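Dumps like the one above are emitted by RocksDB as a single log message; rsyslog escapes each embedded newline as '#012' (octal 012 = LF, tabs as '#011'), so in /var/log/messages the whole block typically arrives as one very long line. Unfolding it when post-processing takes a small helper; the file name is hypothetical:

    def unfold(record: str) -> str:
        # Undo rsyslog's octal control-character escaping (LF and TAB).
        return record.replace('#012', '\n').replace('#011', '\t')

    with open('messages') as fh:        # hypothetical export
        for line in fh:
            if '#012' in line:
                print(unfold(line), end='')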
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???) e1 preinit fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).mds e1 new map
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-11-23T20:38:56:367641+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 23 15:38:58 np0005532761 podman[74570]: 2025-11-23 20:38:58.741723055 +0000 UTC m=+0.060430212 container create 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T20:38:54.371685+0000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : created 2025-11-23T20:38:54.371685+0000
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap 
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 23 15:38:58 np0005532761 systemd[1]: Started libpod-conmon-3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b.scope.
Nov 23 15:38:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077724a9caa192894b8fcbf8482b16d35f8908937d8f30da65cf18f4ac91eab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077724a9caa192894b8fcbf8482b16d35f8908937d8f30da65cf18f4ac91eab6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077724a9caa192894b8fcbf8482b16d35f8908937d8f30da65cf18f4ac91eab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:58 np0005532761 podman[74570]: 2025-11-23 20:38:58.704325779 +0000 UTC m=+0.023032956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:58 np0005532761 podman[74570]: 2025-11-23 20:38:58.831756125 +0000 UTC m=+0.150463282 container init 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 23 15:38:58 np0005532761 podman[74570]: 2025-11-23 20:38:58.83868194 +0000 UTC m=+0.157389107 container start 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:38:58 np0005532761 ceph-mon[74569]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 23 15:38:58 np0005532761 podman[74570]: 2025-11-23 20:38:58.891154958 +0000 UTC m=+0.209862165 container attach 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:38:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Nov 23 15:38:59 np0005532761 systemd[1]: libpod-3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b.scope: Deactivated successfully.
Nov 23 15:38:59 np0005532761 podman[74570]: 2025-11-23 20:38:59.039254336 +0000 UTC m=+0.357961533 container died 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:38:59 np0005532761 podman[74570]: 2025-11-23 20:38:59.292143916 +0000 UTC m=+0.610851083 container remove 3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b (image=quay.io/ceph/ceph:v19, name=jovial_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:59 np0005532761 systemd[1]: libpod-conmon-3040548e42317a39a53a3ebd0ac2f614e68a5322bb2af83b91ba013e938ca07b.scope: Deactivated successfully.
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.351618141 +0000 UTC m=+0.029863946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.457257377 +0000 UTC m=+0.135503102 container create 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:38:59 np0005532761 systemd[1]: Started libpod-conmon-63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236.scope.
Nov 23 15:38:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:38:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a59471d77ca3253b91f699326f33c86fde3b0143eeb98ebcf5197f579950de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a59471d77ca3253b91f699326f33c86fde3b0143eeb98ebcf5197f579950de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a59471d77ca3253b91f699326f33c86fde3b0143eeb98ebcf5197f579950de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.63558677 +0000 UTC m=+0.313832505 container init 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.647347604 +0000 UTC m=+0.325593329 container start 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.683245271 +0000 UTC m=+0.361491026 container attach 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:38:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Nov 23 15:38:59 np0005532761 systemd[1]: libpod-63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236.scope: Deactivated successfully.
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.884325151 +0000 UTC m=+0.562570906 container died 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:38:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b8a59471d77ca3253b91f699326f33c86fde3b0143eeb98ebcf5197f579950de-merged.mount: Deactivated successfully.
Nov 23 15:38:59 np0005532761 podman[74664]: 2025-11-23 20:38:59.934920799 +0000 UTC m=+0.613166514 container remove 63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236 (image=quay.io/ceph/ceph:v19, name=recursing_brown, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 15:38:59 np0005532761 systemd[1]: libpod-conmon-63a013ce1366bb8fd51fb1fe8bc377084db841b5825f82c3e5066d3e93ad6236.scope: Deactivated successfully.
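Both short-lived containers above (jovial_kare, recursing_brown) follow the one-shot pattern cephadm uses for its bootstrap-time `ceph config set` calls: image pull by digest, then create, init, start, attach, died, and remove within roughly a second, with ceph.conf and the admin keyring bind-mounted in. A rough sketch of that pattern under those assumptions (image tag and mount paths are taken from the log; the CIDR value is illustrative):

    import subprocess

    IMAGE = 'quay.io/ceph/ceph:v19'

    def ceph_oneshot(*args):
        # Run one ceph CLI command in a disposable container, mirroring the
        # create/start/died/remove sequences recorded above.
        return subprocess.run(
            ['podman', 'run', '--rm',
             '-v', '/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z',
             '-v', '/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z',
             IMAGE, 'ceph', *args],
            check=True, capture_output=True, text=True).stdout

    ceph_oneshot('config', 'set', 'global', 'public_network', '192.168.122.0/24')  # illustrative CIDR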
Nov 23 15:39:00 np0005532761 systemd[1]: Reloading.
Nov 23 15:39:00 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:39:00 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:39:00 np0005532761 systemd[1]: Reloading.
Nov 23 15:39:00 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:39:00 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:39:00 np0005532761 systemd[1]: Starting Ceph mgr.compute-0.oyehye for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:39:00 np0005532761 podman[74849]: 2025-11-23 20:39:00.847771211 +0000 UTC m=+0.045566086 container create 47b4a98cc84dc613fcce9e91597c1343fbbf5de49af1759bea757baaeb81802d (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:39:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cad2b6efdc4eadab116328c511b6abb491e7188fcd6d2eb4c1843cada0209f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cad2b6efdc4eadab116328c511b6abb491e7188fcd6d2eb4c1843cada0209f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cad2b6efdc4eadab116328c511b6abb491e7188fcd6d2eb4c1843cada0209f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cad2b6efdc4eadab116328c511b6abb491e7188fcd6d2eb4c1843cada0209f/merged/var/lib/ceph/mgr/ceph-compute-0.oyehye supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:00 np0005532761 podman[74849]: 2025-11-23 20:39:00.825350973 +0000 UTC m=+0.023145758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:00 np0005532761 podman[74849]: 2025-11-23 20:39:00.924635559 +0000 UTC m=+0.122430384 container init 47b4a98cc84dc613fcce9e91597c1343fbbf5de49af1759bea757baaeb81802d (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:39:00 np0005532761 podman[74849]: 2025-11-23 20:39:00.934717418 +0000 UTC m=+0.132512183 container start 47b4a98cc84dc613fcce9e91597c1343fbbf5de49af1759bea757baaeb81802d (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:00 np0005532761 bash[74849]: 47b4a98cc84dc613fcce9e91597c1343fbbf5de49af1759bea757baaeb81802d
Nov 23 15:39:00 np0005532761 systemd[1]: Started Ceph mgr.compute-0.oyehye for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:39:00 np0005532761 ceph-mgr[74869]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:39:00 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:39:00 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.069084599 +0000 UTC m=+0.078223446 container create e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:39:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:01.107+0000 7fd620f2f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:39:01 np0005532761 systemd[1]: Started libpod-conmon-e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad.scope.
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.037511557 +0000 UTC m=+0.046650414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f16e44e007a540a6bd6c03d75f95a44d8d5fbd9496bd0f81052d782b54218c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f16e44e007a540a6bd6c03d75f95a44d8d5fbd9496bd0f81052d782b54218c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f16e44e007a540a6bd6c03d75f95a44d8d5fbd9496bd0f81052d782b54218c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.165458328 +0000 UTC m=+0.174597185 container init e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.174436067 +0000 UTC m=+0.183574924 container start e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.17865675 +0000 UTC m=+0.187795607 container attach e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:39:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:01.193+0000 7fd620f2f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
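Every "Module X has missing NOTIFY_TYPES member" line in this sweep (alerts, balancer, and the rest below) is the mgr noting that a module did not declare the optional NOTIFY_TYPES class attribute, which lists the cluster-map updates the module wants routed to its notify() hook; modules without it still load and run. A minimal sketch of the attribute the loader checks for, assuming the in-tree mgr_module Python bindings (the module body is illustrative):

    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring this list is what silences the load-time warning.
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.mon_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info('osd map changed')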
Nov 23 15:39:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 15:39:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208862791' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]: 
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]: {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "health": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "status": "HEALTH_OK",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "checks": {},
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "mutes": []
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "election_epoch": 5,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "quorum": [
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        0
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    ],
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "quorum_names": [
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "compute-0"
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    ],
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "quorum_age": 2,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "monmap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "epoch": 1,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "min_mon_release_name": "squid",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_mons": 1
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "osdmap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "epoch": 1,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_osds": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_up_osds": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "osd_up_since": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_in_osds": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "osd_in_since": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_remapped_pgs": 0
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "pgmap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "pgs_by_state": [],
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_pgs": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_pools": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_objects": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "data_bytes": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "bytes_used": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "bytes_avail": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "bytes_total": 0
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "fsmap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "epoch": 1,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "btime": "2025-11-23T20:38:56:367641+0000",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "by_rank": [],
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "up:standby": 0
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "mgrmap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "available": false,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "num_standbys": 0,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "modules": [
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:            "iostat",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:            "nfs",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:            "restful"
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        ],
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "services": {}
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "servicemap": {
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "epoch": 1,
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "modified": "2025-11-23T20:38:56.370068+0000",
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:        "services": {}
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    },
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]:    "progress_events": {}
Nov 23 15:39:01 np0005532761 dreamy_dewdney[74906]: }
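The JSON above is the response to the status mon_command dispatched at 15:39:01: a quorum of one (compute-0) and HEALTH_OK, but no active mgr, no OSDs, and no pools yet, which is exactly the intermediate state a bootstrap poll has to wait through. A short readiness gate over the same fields, as a sketch (the subprocess invocation mirrors the logged command):

    import json
    import subprocess

    def cluster_ready() -> bool:
        out = subprocess.run(
            ['ceph', 'status', '--format', 'json-pretty'],
            check=True, capture_output=True, text=True).stdout
        st = json.loads(out)
        # HEALTH_OK alone is not sufficient during bootstrap: the dump above
        # is HEALTH_OK with mgrmap.available false and zero OSDs.
        return (st['health']['status'] == 'HEALTH_OK'
                and st['mgrmap']['available']
                and st['osdmap']['num_up_osds'] > 0)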
Nov 23 15:39:01 np0005532761 systemd[1]: libpod-e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad.scope: Deactivated successfully.
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.378354763 +0000 UTC m=+0.387493600 container died e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:39:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-50f16e44e007a540a6bd6c03d75f95a44d8d5fbd9496bd0f81052d782b54218c-merged.mount: Deactivated successfully.
Nov 23 15:39:01 np0005532761 podman[74870]: 2025-11-23 20:39:01.434063477 +0000 UTC m=+0.443202314 container remove e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad (image=quay.io/ceph/ceph:v19, name=dreamy_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:01 np0005532761 systemd[1]: libpod-conmon-e0aeb152c428a2258f7b8495af0110a0156dafc7a6fbc6c3d2d2cb6002ccffad.scope: Deactivated successfully.
Nov 23 15:39:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:02.021+0000 7fd620f2f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:02.665+0000 7fd620f2f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:02.835+0000 7fd620f2f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:39:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:02.915+0000 7fd620f2f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:39:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:39:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:03.051+0000 7fd620f2f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.54340694 +0000 UTC m=+0.065965629 container create 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:39:03 np0005532761 systemd[1]: Started libpod-conmon-45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb.scope.
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.509867096 +0000 UTC m=+0.032425835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:03 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea004cb9685132a8d149e014df3427718e04d871d1ab26eb13ee3dd784a7e324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea004cb9685132a8d149e014df3427718e04d871d1ab26eb13ee3dd784a7e324/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea004cb9685132a8d149e014df3427718e04d871d1ab26eb13ee3dd784a7e324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.71372188 +0000 UTC m=+0.236280589 container init 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.719411222 +0000 UTC m=+0.241969911 container start 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.724532657 +0000 UTC m=+0.247091346 container attach 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:39:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:39:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 15:39:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861805956' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]: 
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]: {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "health": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "status": "HEALTH_OK",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "checks": {},
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "mutes": []
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "election_epoch": 5,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "quorum": [
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        0
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    ],
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "quorum_names": [
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "compute-0"
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    ],
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "quorum_age": 5,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "monmap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "epoch": 1,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "min_mon_release_name": "squid",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_mons": 1
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "osdmap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "epoch": 1,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_osds": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_up_osds": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "osd_up_since": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_in_osds": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "osd_in_since": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_remapped_pgs": 0
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "pgmap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "pgs_by_state": [],
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_pgs": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_pools": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_objects": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "data_bytes": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "bytes_used": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "bytes_avail": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "bytes_total": 0
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "fsmap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "epoch": 1,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "btime": "2025-11-23T20:38:56:367641+0000",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "by_rank": [],
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "up:standby": 0
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "mgrmap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "available": false,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "num_standbys": 0,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "modules": [
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:            "iostat",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:            "nfs",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:            "restful"
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        ],
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "services": {}
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "servicemap": {
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "epoch": 1,
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "modified": "2025-11-23T20:38:56.370068+0000",
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:        "services": {}
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    },
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]:    "progress_events": {}
Nov 23 15:39:03 np0005532761 gracious_williamson[74974]: }
Nov 23 15:39:03 np0005532761 podman[74958]: 2025-11-23 20:39:03.909122698 +0000 UTC m=+0.431681387 container died 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:03 np0005532761 systemd[1]: libpod-45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb.scope: Deactivated successfully.
Nov 23 15:39:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ea004cb9685132a8d149e014df3427718e04d871d1ab26eb13ee3dd784a7e324-merged.mount: Deactivated successfully.
Nov 23 15:39:04 np0005532761 podman[74958]: 2025-11-23 20:39:04.094521309 +0000 UTC m=+0.617080018 container remove 45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb (image=quay.io/ceph/ceph:v19, name=gracious_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.096+0000 7fd620f2f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 systemd[1]: libpod-conmon-45b1e5eb3c4fd9b5de365d0b40225e5c29d2af3645e286617174a99268830beb.scope: Deactivated successfully.
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.304+0000 7fd620f2f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.376+0000 7fd620f2f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.439+0000 7fd620f2f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.517+0000 7fd620f2f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.592+0000 7fd620f2f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:04.945+0000 7fd620f2f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:39:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:39:05 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:39:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:39:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:05.047+0000 7fd620f2f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:39:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:39:05 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:39:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:39:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:05.483+0000 7fd620f2f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.049+0000 7fd620f2f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.118+0000 7fd620f2f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.170712138 +0000 UTC m=+0.041672301 container create f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.203+0000 7fd620f2f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:39:06 np0005532761 systemd[1]: Started libpod-conmon-f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15.scope.
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.150537491 +0000 UTC m=+0.021497664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:06 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5804dfe9b0387b515d63c218ce1d71dfc05ac7dd1fe91a29e90cda447dab6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5804dfe9b0387b515d63c218ce1d71dfc05ac7dd1fe91a29e90cda447dab6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5804dfe9b0387b515d63c218ce1d71dfc05ac7dd1fe91a29e90cda447dab6e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.289890155 +0000 UTC m=+0.160850368 container init f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.298093313 +0000 UTC m=+0.169053476 container start f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.302662786 +0000 UTC m=+0.173622919 container attach f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
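[editor's note] The create/init/start/attach events above (and the died/remove events that follow) are one podman lifecycle: cephadm runs each one-off ceph CLI call in a short-lived quay.io/ceph/ceph:v19 container and removes it on exit. A minimal sketch of that pattern, with illustrative (assumed) bind-mount paths:

    import subprocess

    def ceph_in_container(*args: str) -> str:
        # One-shot container per command, mirroring the create -> start ->
        # attach -> died -> remove sequence in the podman events above.
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",           # assumed mounts; the log
            "-v", "/var/log/ceph:/var/log/ceph:z",   # shows these paths remounted
            "quay.io/ceph/ceph:v19",
            "ceph", *args,
        ]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    print(ceph_in_container("status", "--format", "json-pretty"))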
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.356+0000 7fd620f2f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.432+0000 7fd620f2f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 15:39:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/593026524' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 15:39:06 np0005532761 kind_goodall[75029]: 
Nov 23 15:39:06 np0005532761 kind_goodall[75029]: {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "health": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "status": "HEALTH_OK",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "checks": {},
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "mutes": []
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "election_epoch": 5,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "quorum": [
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        0
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    ],
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "quorum_names": [
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "compute-0"
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    ],
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "quorum_age": 7,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "monmap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "epoch": 1,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "min_mon_release_name": "squid",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_mons": 1
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "osdmap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "epoch": 1,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_osds": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_up_osds": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "osd_up_since": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_in_osds": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "osd_in_since": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_remapped_pgs": 0
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "pgmap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "pgs_by_state": [],
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_pgs": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_pools": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_objects": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "data_bytes": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "bytes_used": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "bytes_avail": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "bytes_total": 0
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "fsmap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "epoch": 1,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "btime": "2025-11-23T20:38:56.367641+0000",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "by_rank": [],
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "up:standby": 0
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "mgrmap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "available": false,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "num_standbys": 0,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "modules": [
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:            "iostat",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:            "nfs",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:            "restful"
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        ],
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "services": {}
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "servicemap": {
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "epoch": 1,
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "modified": "2025-11-23T20:38:56.370068+0000",
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:        "services": {}
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    },
Nov 23 15:39:06 np0005532761 kind_goodall[75029]:    "progress_events": {}
Nov 23 15:39:06 np0005532761 kind_goodall[75029]: }
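[editor's note] The JSON block above is the output of a containerized `ceph status --format json-pretty`. As a minimal sketch (not part of the deployment), the same fields can be checked from a script; note that at this point in the log `mgrmap.available` is still false because the mgr is mid-startup:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    assert status["health"]["status"] == "HEALTH_OK"
    print("mons in quorum:", status["quorum_names"])          # ["compute-0"]
    print("mgr available:", status["mgrmap"]["available"])    # false at this point
    print("osds up/in:", status["osdmap"]["num_up_osds"],
          status["osdmap"]["num_in_osds"])                    # 0 0: no OSDs yet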
Nov 23 15:39:06 np0005532761 systemd[1]: libpod-f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15.scope: Deactivated successfully.
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.533682804 +0000 UTC m=+0.404642927 container died f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.592+0000 7fd620f2f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:39:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fd5804dfe9b0387b515d63c218ce1d71dfc05ac7dd1fe91a29e90cda447dab6e-merged.mount: Deactivated successfully.
Nov 23 15:39:06 np0005532761 podman[75012]: 2025-11-23 20:39:06.88698083 +0000 UTC m=+0.757940963 container remove f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15 (image=quay.io/ceph/ceph:v19, name=kind_goodall, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:39:06 np0005532761 systemd[1]: libpod-conmon-f8e2a7739c3faed1d2380cb95cd735a366217f26efa1ba30b453c406442f7f15.scope: Deactivated successfully.
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:06.915+0000 7fd620f2f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:39:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:07.189+0000 7fd620f2f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:39:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:07.260+0000 7fd620f2f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
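[editor's note] The repeated "has missing NOTIFY_TYPES member" warnings are emitted once per python module as the mgr loads it; they mean the module does not declare which cluster-map updates it wants, and in this log they are noise, not failures (the cluster proceeds to HEALTH_OK). A hypothetical minimal module showing the member the loader checks for; names follow the in-tree mgr_module API, but treat this as a sketch rather than a drop-in module:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declares which notifications notify() should receive; modules
        # without this attribute trigger the warning logged above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got %s notification", notify_type)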
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x560d03c069c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.oyehye(active, starting, since 0.00951723s)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map Activating!
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map I am now activating
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e1 all = 1
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: balancer
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: crash
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer INFO root] Starting
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Manager daemon compute-0.oyehye is now available
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:39:07
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [balancer INFO root] No pools available
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: devicehealth
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Starting
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: iostat
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: nfs
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: orchestrator
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: pg_autoscaler
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: progress
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [progress INFO root] Loading...
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [progress INFO root] No stored events to load
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded [] historic events
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] recovery thread starting
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] starting setup
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: rbd_support
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: restful
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [restful WARNING root] server not running: no certificate configured
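[editor's note] The restful module loads but refuses to serve on port 8003 without a TLS certificate. If the REST endpoint were wanted, the documented remedy is `ceph restful create-self-signed-cert`; a sketch of driving it from a script:

    import subprocess

    # Generates a self-signed certificate for the restful module; only
    # needed if the REST API on port 8003 should actually serve.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)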
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: status
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: telemetry
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] PerfHandler: starting
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TaskHandler: starting
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"} v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: volumes
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 15:39:07 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] setup complete
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: Manager daemon compute-0.oyehye is now available
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:07 np0005532761 ceph-mon[74569]: from='mgr.14102 192.168.122.100:0/156188106' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:08 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.oyehye(active, since 1.02697s)
Nov 23 15:39:08 np0005532761 podman[75148]: 2025-11-23 20:39:08.951156129 +0000 UTC m=+0.038080696 container create b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:08 np0005532761 systemd[1]: Started libpod-conmon-b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae.scope.
Nov 23 15:39:09 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb56376b36327b66148295d61b26b484178d14c224608d744c998664ebe4c850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb56376b36327b66148295d61b26b484178d14c224608d744c998664ebe4c850/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb56376b36327b66148295d61b26b484178d14c224608d744c998664ebe4c850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:08.934057024 +0000 UTC m=+0.020981561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:09.140473925 +0000 UTC m=+0.227398552 container init b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:09.145954851 +0000 UTC m=+0.232879388 container start b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:09.166550001 +0000 UTC m=+0.253474568 container attach b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:09 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.oyehye(active, since 2s)
Nov 23 15:39:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 15:39:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033266383' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]: 
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]: {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "health": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "status": "HEALTH_OK",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "checks": {},
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "mutes": []
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "election_epoch": 5,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "quorum": [
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        0
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    ],
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "quorum_names": [
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "compute-0"
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    ],
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "quorum_age": 10,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "monmap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "epoch": 1,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "min_mon_release_name": "squid",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_mons": 1
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "osdmap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "epoch": 1,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_osds": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_up_osds": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "osd_up_since": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_in_osds": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "osd_in_since": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_remapped_pgs": 0
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "pgmap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "pgs_by_state": [],
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_pgs": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_pools": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_objects": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "data_bytes": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "bytes_used": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "bytes_avail": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "bytes_total": 0
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "fsmap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "epoch": 1,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "btime": "2025-11-23T20:38:56.367641+0000",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "by_rank": [],
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "up:standby": 0
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "mgrmap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "available": true,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "num_standbys": 0,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "modules": [
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:            "iostat",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:            "nfs",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:            "restful"
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        ],
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "services": {}
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "servicemap": {
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "epoch": 1,
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "modified": "2025-11-23T20:38:56.370068+0000",
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:        "services": {}
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    },
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]:    "progress_events": {}
Nov 23 15:39:09 np0005532761 quirky_poincare[75164]: }
Nov 23 15:39:09 np0005532761 systemd[1]: libpod-b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae.scope: Deactivated successfully.
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:09.607385345 +0000 UTC m=+0.694309882 container died b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:39:09 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cb56376b36327b66148295d61b26b484178d14c224608d744c998664ebe4c850-merged.mount: Deactivated successfully.
Nov 23 15:39:09 np0005532761 podman[75148]: 2025-11-23 20:39:09.644986076 +0000 UTC m=+0.731910613 container remove b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae (image=quay.io/ceph/ceph:v19, name=quirky_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:09 np0005532761 systemd[1]: libpod-conmon-b3bc06a5601f0586b8faf934ab13d05271c9a4ab40155008052f6d08236cedae.scope: Deactivated successfully.
Nov 23 15:39:09 np0005532761 podman[75202]: 2025-11-23 20:39:09.705131278 +0000 UTC m=+0.038016061 container create 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:39:09 np0005532761 systemd[1]: Started libpod-conmon-0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959.scope.
Nov 23 15:39:09 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6a1ccfa47f39483983dbeef4c6de968e9e4e96f5948cb0a229d9f0d1a8dc9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6a1ccfa47f39483983dbeef4c6de968e9e4e96f5948cb0a229d9f0d1a8dc9c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6a1ccfa47f39483983dbeef4c6de968e9e4e96f5948cb0a229d9f0d1a8dc9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6a1ccfa47f39483983dbeef4c6de968e9e4e96f5948cb0a229d9f0d1a8dc9c/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:09 np0005532761 podman[75202]: 2025-11-23 20:39:09.686143473 +0000 UTC m=+0.019028286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:09 np0005532761 podman[75202]: 2025-11-23 20:39:09.788708714 +0000 UTC m=+0.121593527 container init 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 15:39:09 np0005532761 podman[75202]: 2025-11-23 20:39:09.803281931 +0000 UTC m=+0.136166714 container start 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 15:39:09 np0005532761 podman[75202]: 2025-11-23 20:39:09.809499353 +0000 UTC m=+0.142384136 container attach 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 23 15:39:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2515995298' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:39:10 np0005532761 pedantic_kapitsa[75218]: 
Nov 23 15:39:10 np0005532761 pedantic_kapitsa[75218]: [global]
Nov 23 15:39:10 np0005532761 pedantic_kapitsa[75218]: 	fsid = 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:39:10 np0005532761 pedantic_kapitsa[75218]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
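[editor's note] `ceph config assimilate-conf` moves options from a local conf file into the monitors' central config database and prints back the remainder that must stay on disk; here only fsid and mon_host come back, meaning everything else was assimilated. A sketch of the same round trip (the `-i` input-file form is the documented CLI):

    import subprocess

    remainder = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Expect a [global] section holding just fsid and mon_host, as logged.
    print(remainder)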
Nov 23 15:39:10 np0005532761 systemd[1]: libpod-0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959.scope: Deactivated successfully.
Nov 23 15:39:10 np0005532761 conmon[75218]: conmon 0d3231ef77b25ebdf9c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959.scope/container/memory.events
Nov 23 15:39:10 np0005532761 podman[75202]: 2025-11-23 20:39:10.184927504 +0000 UTC m=+0.517812297 container died 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:10 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5f6a1ccfa47f39483983dbeef4c6de968e9e4e96f5948cb0a229d9f0d1a8dc9c-merged.mount: Deactivated successfully.
Nov 23 15:39:10 np0005532761 podman[75202]: 2025-11-23 20:39:10.221701255 +0000 UTC m=+0.554586038 container remove 0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959 (image=quay.io/ceph/ceph:v19, name=pedantic_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:10 np0005532761 systemd[1]: libpod-conmon-0d3231ef77b25ebdf9c07587d0764d702eae9401001c2b92a120ffe05355e959.scope: Deactivated successfully.
Nov 23 15:39:10 np0005532761 podman[75256]: 2025-11-23 20:39:10.283423176 +0000 UTC m=+0.042881621 container create 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:10 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2515995298' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:39:10 np0005532761 systemd[1]: Started libpod-conmon-4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0.scope.
Nov 23 15:39:10 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929e27d6a56fb8277a447874a3347a4509645c07d6423d72d8855059f9af6c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929e27d6a56fb8277a447874a3347a4509645c07d6423d72d8855059f9af6c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929e27d6a56fb8277a447874a3347a4509645c07d6423d72d8855059f9af6c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:10 np0005532761 podman[75256]: 2025-11-23 20:39:10.263000896 +0000 UTC m=+0.022459331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:10 np0005532761 podman[75256]: 2025-11-23 20:39:10.367439593 +0000 UTC m=+0.126898038 container init 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:10 np0005532761 podman[75256]: 2025-11-23 20:39:10.372109527 +0000 UTC m=+0.131567952 container start 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:10 np0005532761 podman[75256]: 2025-11-23 20:39:10.37670721 +0000 UTC m=+0.136165635 container attach 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Nov 23 15:39:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3326957814' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:11 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3326957814' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 23 15:39:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3326957814' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  1: '-n'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  2: 'mgr.compute-0.oyehye'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  3: '-f'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  4: '--setuser'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  5: 'ceph'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  6: '--setgroup'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  7: 'ceph'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr respawn  exe_path /proc/self/exe
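[editor's note] Enabling cephadm changed the set of enabled mgr modules, so the daemon re-executes itself: it logs its saved argv ("respawn 0:" through "10:") and then exec's /proc/self/exe, which always resolves to the running binary even if the path on disk has changed. The mgr does this in C++; a Python stand-in for the same pattern, purely as an illustration:

    import os
    import sys

    def respawn() -> None:
        # Rebuild argv exactly as originally invoked, mirroring the
        # "mgr respawn  0: ... 10: ..." lines above.
        argv = [sys.executable] + sys.argv
        # /proc/self/exe names the live binary image, which is why the log
        # shows "exe_path /proc/self/exe" before the re-exec.
        os.execv("/proc/self/exe", argv)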
Nov 23 15:39:11 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.oyehye(active, since 4s)
Nov 23 15:39:11 np0005532761 systemd[1]: libpod-4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0.scope: Deactivated successfully.
Nov 23 15:39:11 np0005532761 podman[75256]: 2025-11-23 20:39:11.350597903 +0000 UTC m=+1.110056328 container died 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 15:39:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0929e27d6a56fb8277a447874a3347a4509645c07d6423d72d8855059f9af6c3-merged.mount: Deactivated successfully.
Nov 23 15:39:11 np0005532761 podman[75256]: 2025-11-23 20:39:11.398520456 +0000 UTC m=+1.157978931 container remove 4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0 (image=quay.io/ceph/ceph:v19, name=musing_yonath, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:11 np0005532761 systemd[1]: libpod-conmon-4c78bd4144b0f7aa237751ea988d4c9a1d5961873b94004197d7e85c94158af0.scope: Deactivated successfully.
Nov 23 15:39:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setuser ceph since I am not root
Nov 23 15:39:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setgroup ceph since I am not root
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:39:11 np0005532761 podman[75309]: 2025-11-23 20:39:11.462621986 +0000 UTC m=+0.041261342 container create 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:11 np0005532761 systemd[1]: Started libpod-conmon-35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5.scope.
Nov 23 15:39:11 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d052bbfd2cb1966394e14894f908f98959c3db08f8da53f32eacb565d1de1cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d052bbfd2cb1966394e14894f908f98959c3db08f8da53f32eacb565d1de1cc8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d052bbfd2cb1966394e14894f908f98959c3db08f8da53f32eacb565d1de1cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:11 np0005532761 podman[75309]: 2025-11-23 20:39:11.520175105 +0000 UTC m=+0.098814501 container init 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:39:11 np0005532761 podman[75309]: 2025-11-23 20:39:11.527227727 +0000 UTC m=+0.105867093 container start 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:11 np0005532761 podman[75309]: 2025-11-23 20:39:11.533445489 +0000 UTC m=+0.112084875 container attach 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:11 np0005532761 podman[75309]: 2025-11-23 20:39:11.446350137 +0000 UTC m=+0.024989503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:39:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:11.544+0000 7fd1a7fc6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:39:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:39:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:11.624+0000 7fd1a7fc6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:39:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 15:39:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769739522' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 15:39:11 np0005532761 confident_jang[75345]: {
Nov 23 15:39:11 np0005532761 confident_jang[75345]:    "epoch": 5,
Nov 23 15:39:11 np0005532761 confident_jang[75345]:    "available": true,
Nov 23 15:39:11 np0005532761 confident_jang[75345]:    "active_name": "compute-0.oyehye",
Nov 23 15:39:11 np0005532761 confident_jang[75345]:    "num_standby": 0
Nov 23 15:39:11 np0005532761 confident_jang[75345]: }
Nov 23 15:39:11 np0005532761 systemd[1]: libpod-35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5.scope: Deactivated successfully.
Nov 23 15:39:11 np0005532761 podman[75371]: 2025-11-23 20:39:11.972036068 +0000 UTC m=+0.021723804 container died 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d052bbfd2cb1966394e14894f908f98959c3db08f8da53f32eacb565d1de1cc8-merged.mount: Deactivated successfully.
Nov 23 15:39:12 np0005532761 podman[75371]: 2025-11-23 20:39:12.007000954 +0000 UTC m=+0.056688690 container remove 35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5 (image=quay.io/ceph/ceph:v19, name=confident_jang, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:39:12 np0005532761 systemd[1]: libpod-conmon-35e10d7d9b5bf587c40bbfc1ea0cbdde374a6009e8e7fe69a7554591b9b108a5.scope: Deactivated successfully.
Nov 23 15:39:12 np0005532761 podman[75391]: 2025-11-23 20:39:12.063747592 +0000 UTC m=+0.035062819 container create 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:12 np0005532761 systemd[1]: Started libpod-conmon-6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2.scope.
Nov 23 15:39:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4898d4b54226705ba734c562372d9cfaffcf0dcae5a3597e27a0b79e42cad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4898d4b54226705ba734c562372d9cfaffcf0dcae5a3597e27a0b79e42cad5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4898d4b54226705ba734c562372d9cfaffcf0dcae5a3597e27a0b79e42cad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:12 np0005532761 podman[75391]: 2025-11-23 20:39:12.142524821 +0000 UTC m=+0.113840098 container init 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:39:12 np0005532761 podman[75391]: 2025-11-23 20:39:12.047161726 +0000 UTC m=+0.018476973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:12 np0005532761 podman[75391]: 2025-11-23 20:39:12.151119312 +0000 UTC m=+0.122434539 container start 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 15:39:12 np0005532761 podman[75391]: 2025-11-23 20:39:12.156249577 +0000 UTC m=+0.127564894 container attach 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:39:12 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3326957814' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 23 15:39:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:39:12 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:39:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:39:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:12.472+0000 7fd1a7fc6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:13.086+0000 7fd1a7fc6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:13.253+0000 7fd1a7fc6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:13.326+0000 7fd1a7fc6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:39:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:13.463+0000 7fd1a7fc6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:39:13 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.481+0000 7fd1a7fc6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.693+0000 7fd1a7fc6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.772+0000 7fd1a7fc6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.837+0000 7fd1a7fc6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.912+0000 7fd1a7fc6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:39:14 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:39:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:14.982+0000 7fd1a7fc6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:39:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:15.323+0000 7fd1a7fc6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:39:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:15.423+0000 7fd1a7fc6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:39:15 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:39:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:15.859+0000 7fd1a7fc6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.424+0000 7fd1a7fc6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.499+0000 7fd1a7fc6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.583+0000 7fd1a7fc6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.742+0000 7fd1a7fc6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.814+0000 7fd1a7fc6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:39:16 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:39:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:16.968+0000 7fd1a7fc6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:39:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:17.186+0000 7fd1a7fc6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:39:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:17.458+0000 7fd1a7fc6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:39:17.530+0000 7fd1a7fc6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Active manager daemon compute-0.oyehye restarted
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x556047b48d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map Activating!
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map I am now activating
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.oyehye(active, starting, since 0.0863428s)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e1 all = 1
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: balancer
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Manager daemon compute-0.oyehye is now available
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:39:17
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] No pools available
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: Manager daemon compute-0.oyehye is now available
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: cephadm
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: crash
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: devicehealth
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: iostat
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: nfs
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: orchestrator
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: pg_autoscaler
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: progress
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [progress INFO root] Loading...
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [progress INFO root] No stored events to load
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded [] historic events
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] recovery thread starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] starting setup
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: rbd_support
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: restful
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: status
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [restful WARNING root] server not running: no certificate configured
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: telemetry
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] PerfHandler: starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TaskHandler: starting
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"} v 0)
Nov 23 15:39:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] setup complete
Nov 23 15:39:17 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: volumes
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.oyehye(active, since 1.09364s)
Nov 23 15:39:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 23 15:39:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 23 15:39:18 np0005532761 naughty_ardinghelli[75415]: {
Nov 23 15:39:18 np0005532761 naughty_ardinghelli[75415]:    "mgrmap_epoch": 7,
Nov 23 15:39:18 np0005532761 naughty_ardinghelli[75415]:    "initialized": true
Nov 23 15:39:18 np0005532761 naughty_ardinghelli[75415]: }
Nov 23 15:39:18 np0005532761 systemd[1]: libpod-6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2.scope: Deactivated successfully.
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: Found migration_current of "None". Setting to last migration.
Nov 23 15:39:18 np0005532761 podman[75391]: 2025-11-23 20:39:18.669970928 +0000 UTC m=+6.641286155 container died 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6b4898d4b54226705ba734c562372d9cfaffcf0dcae5a3597e27a0b79e42cad5-merged.mount: Deactivated successfully.
Nov 23 15:39:18 np0005532761 podman[75391]: 2025-11-23 20:39:18.71289125 +0000 UTC m=+6.684206477 container remove 6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2 (image=quay.io/ceph/ceph:v19, name=naughty_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 15:39:18 np0005532761 systemd[1]: libpod-conmon-6a49bfc49456a65107cff02ec6acb0ee88290e591d94775841d2be79e5e01af2.scope: Deactivated successfully.
Nov 23 15:39:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924946 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:18 np0005532761 podman[75566]: 2025-11-23 20:39:18.776022225 +0000 UTC m=+0.043535817 container create 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:39:18 np0005532761 systemd[1]: Started libpod-conmon-701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8.scope.
Nov 23 15:39:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fce385784e958b6941c2951bb859afadd27c27f663218a61538b3493e85cf01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fce385784e958b6941c2951bb859afadd27c27f663218a61538b3493e85cf01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fce385784e958b6941c2951bb859afadd27c27f663218a61538b3493e85cf01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:18 np0005532761 podman[75566]: 2025-11-23 20:39:18.755272217 +0000 UTC m=+0.022785819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:18 np0005532761 podman[75566]: 2025-11-23 20:39:18.867522776 +0000 UTC m=+0.135036428 container init 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:18 np0005532761 podman[75566]: 2025-11-23 20:39:18.874579079 +0000 UTC m=+0.142092671 container start 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:39:18 np0005532761 podman[75566]: 2025-11-23 20:39:18.879519559 +0000 UTC m=+0.147033201 container attach 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:39:19 np0005532761 systemd[1]: libpod-701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8.scope: Deactivated successfully.
Nov 23 15:39:19 np0005532761 podman[75566]: 2025-11-23 20:39:19.255866223 +0000 UTC m=+0.523379835 container died 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:19 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1fce385784e958b6941c2951bb859afadd27c27f663218a61538b3493e85cf01-merged.mount: Deactivated successfully.
Nov 23 15:39:19 np0005532761 podman[75566]: 2025-11-23 20:39:19.310565842 +0000 UTC m=+0.578079464 container remove 701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8 (image=quay.io/ceph/ceph:v19, name=recursing_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:39:19 np0005532761 systemd[1]: libpod-conmon-701ef90b6e8325c6e989541fe0d9d4d0f67fc23268c620f3a5142a3dd3cd14a8.scope: Deactivated successfully.
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.380225348 +0000 UTC m=+0.044918161 container create f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 15:39:19 np0005532761 systemd[1]: Started libpod-conmon-f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412.scope.
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.358303911 +0000 UTC m=+0.022996724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:19 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:19 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9125e55beb226700225e7c2f58b6dd8e9478b9039ca4da607708bd0f89e75be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:19 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9125e55beb226700225e7c2f58b6dd8e9478b9039ca4da607708bd0f89e75be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:19 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9125e55beb226700225e7c2f58b6dd8e9478b9039ca4da607708bd0f89e75be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.487303969 +0000 UTC m=+0.151996752 container init f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.494105886 +0000 UTC m=+0.158798659 container start f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.497652763 +0000 UTC m=+0.162345566 container attach f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:39:19] ENGINE Bus STARTING
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:39:19] ENGINE Bus STARTING
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:39:19] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:39:19] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Set ssh ssh_user
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Set ssh ssh_config
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 23 15:39:19 np0005532761 practical_tharp[75638]: ssh user set to ceph-admin. sudo will be used
Nov 23 15:39:19 np0005532761 systemd[1]: libpod-f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412.scope: Deactivated successfully.
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.876088018 +0000 UTC m=+0.540780791 container died f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:19 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c9125e55beb226700225e7c2f58b6dd8e9478b9039ca4da607708bd0f89e75be-merged.mount: Deactivated successfully.
Nov 23 15:39:19 np0005532761 podman[75622]: 2025-11-23 20:39:19.910602943 +0000 UTC m=+0.575295716 container remove f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412 (image=quay.io/ceph/ceph:v19, name=practical_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:39:19 np0005532761 systemd[1]: libpod-conmon-f66afbb4f2e66d0f1d2506a64098cc1383360fc9cd7db5177a4abe7842f7a412.scope: Deactivated successfully.
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:39:19] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:39:19] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:39:19] ENGINE Bus STARTED
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:39:19] ENGINE Bus STARTED
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:39:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:39:19] ENGINE Client ('192.168.122.100', 36400) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:39:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:39:19] ENGINE Client ('192.168.122.100', 36400) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:39:19 np0005532761 podman[75697]: 2025-11-23 20:39:19.969511046 +0000 UTC m=+0.041881877 container create 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:39:20 np0005532761 systemd[1]: Started libpod-conmon-209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb.scope.
Nov 23 15:39:20 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 podman[75697]: 2025-11-23 20:39:19.948740147 +0000 UTC m=+0.021110998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:20 np0005532761 podman[75697]: 2025-11-23 20:39:20.053458941 +0000 UTC m=+0.125829792 container init 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:39:20 np0005532761 podman[75697]: 2025-11-23 20:39:20.060748789 +0000 UTC m=+0.133119640 container start 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:20 np0005532761 podman[75697]: 2025-11-23 20:39:20.064528291 +0000 UTC m=+0.136899132 container attach 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.oyehye(active, since 2s)
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Set ssh private key
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 23 15:39:20 np0005532761 systemd[1]: libpod-209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb.scope: Deactivated successfully.
Nov 23 15:39:20 np0005532761 podman[75739]: 2025-11-23 20:39:20.457790979 +0000 UTC m=+0.021855185 container died 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:39:20 np0005532761 systemd[1]: var-lib-containers-storage-overlay-89d2c8f5272607651fecb25a779e135460c15aed979b00dd98c6dc8077a40544-merged.mount: Deactivated successfully.
Nov 23 15:39:20 np0005532761 podman[75739]: 2025-11-23 20:39:20.488883991 +0000 UTC m=+0.052948177 container remove 209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb (image=quay.io/ceph/ceph:v19, name=gracious_cori, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:20 np0005532761 systemd[1]: libpod-conmon-209de68d0281059f9dde7b82ddf7f8381a4f276d7b404f8314adfe1705716dfb.scope: Deactivated successfully.
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.548643644 +0000 UTC m=+0.038689998 container create a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:20 np0005532761 systemd[1]: Started libpod-conmon-a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71.scope.
Nov 23 15:39:20 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.604487141 +0000 UTC m=+0.094533525 container init a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.611427791 +0000 UTC m=+0.101474145 container start a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.615920511 +0000 UTC m=+0.105966895 container attach a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.531683139 +0000 UTC m=+0.021729513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Nov 23 15:39:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 23 15:39:20 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 23 15:39:20 np0005532761 systemd[1]: libpod-a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71.scope: Deactivated successfully.
Nov 23 15:39:20 np0005532761 conmon[75770]: conmon a68e3bb712abb77a26ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71.scope/container/memory.events
Nov 23 15:39:20 np0005532761 podman[75754]: 2025-11-23 20:39:20.961100972 +0000 UTC m=+0.451147386 container died a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 15:39:20 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f6f1d4c072f3e3b70556717d01d0a5ba692bc1d293d446fcf1ed4c5092baba4c-merged.mount: Deactivated successfully.
Nov 23 15:39:21 np0005532761 podman[75754]: 2025-11-23 20:39:21.003892079 +0000 UTC m=+0.493938433 container remove a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71 (image=quay.io/ceph/ceph:v19, name=eloquent_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:39:21 np0005532761 systemd[1]: libpod-conmon-a68e3bb712abb77a26ad850f94655b1a23777f1e245adffd40ad5aa2e15baa71.scope: Deactivated successfully.
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.065670593 +0000 UTC m=+0.043246330 container create 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:39:21 np0005532761 systemd[1]: Started libpod-conmon-22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca.scope.
Nov 23 15:39:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb1e2a8f404cc2df3c548d384c98440d06a36bac476dc3990dd213f83f4be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb1e2a8f404cc2df3c548d384c98440d06a36bac476dc3990dd213f83f4be5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fb1e2a8f404cc2df3c548d384c98440d06a36bac476dc3990dd213f83f4be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.045531459 +0000 UTC m=+0.023107226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.141055537 +0000 UTC m=+0.118632174 container init 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.145952858 +0000 UTC m=+0.123528585 container start 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.150884388 +0000 UTC m=+0.128460145 container attach 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:39:19] ENGINE Bus STARTING
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:39:19] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: Set ssh ssh_user
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: Set ssh ssh_config
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: ssh user set to ceph-admin. sudo will be used
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:39:19] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:39:19] ENGINE Bus STARTED
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:39:19] ENGINE Client ('192.168.122.100', 36400) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: Set ssh ssh_identity_key
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: Set ssh private key
Nov 23 15:39:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:21 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:21 np0005532761 admiring_euclid[75827]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPzZBY2C8y7UA1hs1DGwqzP5aZbr/afXRGfampcsSfOIficd9VmxapptoA9QobYVU5XhQLz6xM6Fswzwl3lgtspO76vJrPaFPItgcqYbxviAO0918TXtlmo03vh2TA2CXQG42C69++kvk0Gw39h4JWhXuBgsLlB7fbks3MNLTPFjY84HuMld4cXGun6aks6c76qlPjto4yW2egxGK1igZb83RFpj6M3xBfk6jDyB4nAPgQrQoaqZksY0dnIeGInvee+D24iVyr696Ixp55Fxf9bmXdqkWDLdXbLQ0itxMFRTk9iZBYMUFPwqQEwyXW+Y/2lufGNbS+tGWN73b4+3EnaSSbgGUFDNlfnkXJqK1UKBUsvHzxf4pPZ3abwwDda2hNbuAnS+615Cc7STMxdawCDyS3SZ4WTRHHeRcCjdJsl99UQhrTZtzjpb5t52rHDZGfX0HdP6CGm3Renu+TalA0Ksj4nxhsjp0SnAKitwkDD9SbgVNSGYnvWKHp3fa+hr0= zuul@controller
Nov 23 15:39:21 np0005532761 systemd[1]: libpod-22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca.scope: Deactivated successfully.
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.504628229 +0000 UTC m=+0.482203956 container died 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:39:21 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f4fb1e2a8f404cc2df3c548d384c98440d06a36bac476dc3990dd213f83f4be5-merged.mount: Deactivated successfully.
Nov 23 15:39:21 np0005532761 podman[75811]: 2025-11-23 20:39:21.548079553 +0000 UTC m=+0.525655280 container remove 22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca (image=quay.io/ceph/ceph:v19, name=admiring_euclid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:21 np0005532761 systemd[1]: libpod-conmon-22e40feeea65955689b32bd420f8c6907656961027bb04fc26d298f8f2634dca.scope: Deactivated successfully.
Nov 23 15:39:21 np0005532761 podman[75864]: 2025-11-23 20:39:21.605416197 +0000 UTC m=+0.038823222 container create 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:39:21 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:21 np0005532761 systemd[1]: Started libpod-conmon-8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a.scope.
Nov 23 15:39:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921766cbf322278ce1da6494bb5db84183959a7a2efb26a07daa68c6facd7315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921766cbf322278ce1da6494bb5db84183959a7a2efb26a07daa68c6facd7315/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921766cbf322278ce1da6494bb5db84183959a7a2efb26a07daa68c6facd7315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:21 np0005532761 podman[75864]: 2025-11-23 20:39:21.679266305 +0000 UTC m=+0.112673350 container init 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:21 np0005532761 podman[75864]: 2025-11-23 20:39:21.588514693 +0000 UTC m=+0.021921738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:21 np0005532761 podman[75864]: 2025-11-23 20:39:21.684511323 +0000 UTC m=+0.117918338 container start 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:39:21 np0005532761 podman[75864]: 2025-11-23 20:39:21.688239954 +0000 UTC m=+0.121646969 container attach 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:22 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:22 np0005532761 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 15:39:22 np0005532761 ceph-mon[74569]: Set ssh ssh_identity_pub
Nov 23 15:39:22 np0005532761 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 15:39:22 np0005532761 systemd-logind[820]: New session 21 of user ceph-admin.
Nov 23 15:39:22 np0005532761 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 15:39:22 np0005532761 systemd[1]: Starting User Manager for UID 42477...
Nov 23 15:39:22 np0005532761 systemd[75910]: Queued start job for default target Main User Target.
Nov 23 15:39:22 np0005532761 systemd[75910]: Created slice User Application Slice.
Nov 23 15:39:22 np0005532761 systemd[75910]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:39:22 np0005532761 systemd[75910]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:39:22 np0005532761 systemd[75910]: Reached target Paths.
Nov 23 15:39:22 np0005532761 systemd[75910]: Reached target Timers.
Nov 23 15:39:22 np0005532761 systemd[75910]: Starting D-Bus User Message Bus Socket...
Nov 23 15:39:22 np0005532761 systemd[75910]: Starting Create User's Volatile Files and Directories...
Nov 23 15:39:22 np0005532761 systemd[75910]: Finished Create User's Volatile Files and Directories.
Nov 23 15:39:22 np0005532761 systemd[75910]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:39:22 np0005532761 systemd[75910]: Reached target Sockets.
Nov 23 15:39:22 np0005532761 systemd[75910]: Reached target Basic System.
Nov 23 15:39:22 np0005532761 systemd[75910]: Reached target Main User Target.
Nov 23 15:39:22 np0005532761 systemd[75910]: Startup finished in 121ms.
Nov 23 15:39:22 np0005532761 systemd[1]: Started User Manager for UID 42477.
Nov 23 15:39:22 np0005532761 systemd[1]: Started Session 21 of User ceph-admin.
Nov 23 15:39:22 np0005532761 systemd-logind[820]: New session 23 of user ceph-admin.
Nov 23 15:39:22 np0005532761 systemd[1]: Started Session 23 of User ceph-admin.
Nov 23 15:39:22 np0005532761 systemd-logind[820]: New session 24 of user ceph-admin.
Nov 23 15:39:22 np0005532761 systemd[1]: Started Session 24 of User ceph-admin.
Nov 23 15:39:23 np0005532761 systemd-logind[820]: New session 25 of user ceph-admin.
Nov 23 15:39:23 np0005532761 systemd[1]: Started Session 25 of User ceph-admin.
Nov 23 15:39:23 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 23 15:39:23 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 23 15:39:23 np0005532761 systemd-logind[820]: New session 26 of user ceph-admin.
Nov 23 15:39:23 np0005532761 systemd[1]: Started Session 26 of User ceph-admin.
Nov 23 15:39:23 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053057 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:23 np0005532761 systemd-logind[820]: New session 27 of user ceph-admin.
Nov 23 15:39:23 np0005532761 systemd[1]: Started Session 27 of User ceph-admin.
Nov 23 15:39:24 np0005532761 systemd-logind[820]: New session 28 of user ceph-admin.
Nov 23 15:39:24 np0005532761 systemd[1]: Started Session 28 of User ceph-admin.
Nov 23 15:39:24 np0005532761 ceph-mon[74569]: Deploying cephadm binary to compute-0
Nov 23 15:39:24 np0005532761 systemd-logind[820]: New session 29 of user ceph-admin.
Nov 23 15:39:24 np0005532761 systemd[1]: Started Session 29 of User ceph-admin.
Nov 23 15:39:24 np0005532761 systemd-logind[820]: New session 30 of user ceph-admin.
Nov 23 15:39:24 np0005532761 systemd[1]: Started Session 30 of User ceph-admin.
Nov 23 15:39:25 np0005532761 systemd-logind[820]: New session 31 of user ceph-admin.
Nov 23 15:39:25 np0005532761 systemd[1]: Started Session 31 of User ceph-admin.
Nov 23 15:39:25 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:26 np0005532761 systemd-logind[820]: New session 32 of user ceph-admin.
Nov 23 15:39:26 np0005532761 systemd[1]: Started Session 32 of User ceph-admin.
Nov 23 15:39:26 np0005532761 systemd-logind[820]: New session 33 of user ceph-admin.
Nov 23 15:39:26 np0005532761 systemd[1]: Started Session 33 of User ceph-admin.
Nov 23 15:39:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:26 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Added host compute-0
Nov 23 15:39:26 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 23 15:39:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:39:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:39:26 np0005532761 gracious_williams[75880]: Added host 'compute-0' with addr '192.168.122.100'
Nov 23 15:39:26 np0005532761 systemd[1]: libpod-8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a.scope: Deactivated successfully.
Nov 23 15:39:26 np0005532761 podman[76277]: 2025-11-23 20:39:26.93056952 +0000 UTC m=+0.026268173 container died 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:39:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-921766cbf322278ce1da6494bb5db84183959a7a2efb26a07daa68c6facd7315-merged.mount: Deactivated successfully.
Nov 23 15:39:26 np0005532761 podman[76277]: 2025-11-23 20:39:26.967737231 +0000 UTC m=+0.063435884 container remove 8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a (image=quay.io/ceph/ceph:v19, name=gracious_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:39:26 np0005532761 systemd[1]: libpod-conmon-8acec8adfc4dbe369c7a07b3f6f14cb7ca1d73c101d4678a1a0c18022612b23a.scope: Deactivated successfully.
Nov 23 15:39:27 np0005532761 podman[76330]: 2025-11-23 20:39:27.032977207 +0000 UTC m=+0.040153813 container create ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:27 np0005532761 systemd[1]: Started libpod-conmon-ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9.scope.
Nov 23 15:39:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1747ff7cb5e731fdb1c7e118008067093e8a898440617e9cc823fdf53feef7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1747ff7cb5e731fdb1c7e118008067093e8a898440617e9cc823fdf53feef7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1747ff7cb5e731fdb1c7e118008067093e8a898440617e9cc823fdf53feef7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 podman[76330]: 2025-11-23 20:39:27.106502008 +0000 UTC m=+0.113678614 container init ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:39:27 np0005532761 podman[76330]: 2025-11-23 20:39:27.015681104 +0000 UTC m=+0.022857730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:27 np0005532761 podman[76330]: 2025-11-23 20:39:27.116283097 +0000 UTC m=+0.123459703 container start ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:27 np0005532761 podman[76330]: 2025-11-23 20:39:27.119329282 +0000 UTC m=+0.126505888 container attach ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:27 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:27 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 23 15:39:27 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:27 np0005532761 practical_ritchie[76346]: Scheduled mon update...
Nov 23 15:39:27 np0005532761 systemd[1]: libpod-ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9.scope: Deactivated successfully.
Nov 23 15:39:27 np0005532761 podman[76398]: 2025-11-23 20:39:27.53384838 +0000 UTC m=+0.022464181 container died ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:39:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1e1747ff7cb5e731fdb1c7e118008067093e8a898440617e9cc823fdf53feef7-merged.mount: Deactivated successfully.
Nov 23 15:39:27 np0005532761 podman[76398]: 2025-11-23 20:39:27.571333598 +0000 UTC m=+0.059949389 container remove ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9 (image=quay.io/ceph/ceph:v19, name=practical_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:27 np0005532761 systemd[1]: libpod-conmon-ecfad1d8f8db93a2a5d566c7d5f20ad9c862ba57cb2a6dd459637467184676b9.scope: Deactivated successfully.
Nov 23 15:39:27 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:27 np0005532761 podman[76413]: 2025-11-23 20:39:27.634792532 +0000 UTC m=+0.040182385 container create 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:39:27 np0005532761 podman[76363]: 2025-11-23 20:39:27.654219047 +0000 UTC m=+0.449035395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:27 np0005532761 systemd[1]: Started libpod-conmon-52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1.scope.
Nov 23 15:39:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa91cef95b43b42bcb527ade82f34f081bd41345016543bc6080414a30ac276/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa91cef95b43b42bcb527ade82f34f081bd41345016543bc6080414a30ac276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa91cef95b43b42bcb527ade82f34f081bd41345016543bc6080414a30ac276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:27 np0005532761 podman[76413]: 2025-11-23 20:39:27.701755131 +0000 UTC m=+0.107145014 container init 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:27 np0005532761 podman[76413]: 2025-11-23 20:39:27.70905458 +0000 UTC m=+0.114444473 container start 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:27 np0005532761 podman[76413]: 2025-11-23 20:39:27.614892845 +0000 UTC m=+0.020282738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:27 np0005532761 podman[76413]: 2025-11-23 20:39:27.713047048 +0000 UTC m=+0.118436931 container attach 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.758742147 +0000 UTC m=+0.041701992 container create 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:39:27 np0005532761 systemd[1]: Started libpod-conmon-53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49.scope.
Nov 23 15:39:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.809849437 +0000 UTC m=+0.092809302 container init 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.817328321 +0000 UTC m=+0.100288166 container start 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.822242581 +0000 UTC m=+0.105202446 container attach 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.740648914 +0000 UTC m=+0.023608789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: Added host compute-0
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:27 np0005532761 hungry_fermi[76464]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 23 15:39:27 np0005532761 systemd[1]: libpod-53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49.scope: Deactivated successfully.
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.912263825 +0000 UTC m=+0.195223660 container died 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:27 np0005532761 podman[76447]: 2025-11-23 20:39:27.947087887 +0000 UTC m=+0.230047742 container remove 53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49 (image=quay.io/ceph/ceph:v19, name=hungry_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:39:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-918646529e909fa783caffcc58f7177c2d166e88df4e5f4e7d7ae791d68566b1-merged.mount: Deactivated successfully.
Nov 23 15:39:27 np0005532761 systemd[1]: libpod-conmon-53c2b27145640fd3c5501da93da181df7dc554cb8033be4ecfc82580a0f64e49.scope: Deactivated successfully.
Nov 23 15:39:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:28 np0005532761 cranky_murdock[76437]: Scheduled mgr update...
Nov 23 15:39:28 np0005532761 systemd[1]: libpod-52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1.scope: Deactivated successfully.
Nov 23 15:39:28 np0005532761 podman[76413]: 2025-11-23 20:39:28.089147216 +0000 UTC m=+0.494537069 container died 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3aa91cef95b43b42bcb527ade82f34f081bd41345016543bc6080414a30ac276-merged.mount: Deactivated successfully.
Nov 23 15:39:28 np0005532761 podman[76413]: 2025-11-23 20:39:28.132785254 +0000 UTC m=+0.538175107 container remove 52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1 (image=quay.io/ceph/ceph:v19, name=cranky_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:28 np0005532761 systemd[1]: libpod-conmon-52b393f231731ef3308a2f3ab1a7c43b900fdcbdebb62b245816a6a2593281b1.scope: Deactivated successfully.
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.185823943 +0000 UTC m=+0.035065080 container create f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:39:28 np0005532761 systemd[1]: Started libpod-conmon-f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a.scope.
Nov 23 15:39:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859f44446133bc842b75a248c75aa209159580aab7189ea7b3915aaa28c0032e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859f44446133bc842b75a248c75aa209159580aab7189ea7b3915aaa28c0032e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859f44446133bc842b75a248c75aa209159580aab7189ea7b3915aaa28c0032e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.260387358 +0000 UTC m=+0.109628525 container init f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.265418282 +0000 UTC m=+0.114659419 container start f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.169680417 +0000 UTC m=+0.018921554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.268415315 +0000 UTC m=+0.117656482 container attach f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service crash spec with placement *
Nov 23 15:39:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:28 np0005532761 practical_mestorf[76579]: Scheduled crash update...
Nov 23 15:39:28 np0005532761 systemd[1]: libpod-f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a.scope: Deactivated successfully.
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.65015531 +0000 UTC m=+0.499396447 container died f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:39:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-859f44446133bc842b75a248c75aa209159580aab7189ea7b3915aaa28c0032e-merged.mount: Deactivated successfully.
Nov 23 15:39:28 np0005532761 podman[76562]: 2025-11-23 20:39:28.688770056 +0000 UTC m=+0.538011193 container remove f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a (image=quay.io/ceph/ceph:v19, name=practical_mestorf, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:28 np0005532761 systemd[1]: libpod-conmon-f9904a274d5bde76893f184baafcc169203c9f0a4be73456474812ac07a26f4a.scope: Deactivated successfully.
Nov 23 15:39:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:28 np0005532761 podman[76688]: 2025-11-23 20:39:28.748230801 +0000 UTC m=+0.041832984 container create cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:39:28 np0005532761 systemd[1]: Started libpod-conmon-cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc.scope.
Nov 23 15:39:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2489949724fdbc36ddf4950223445e1c9829924718ac852f7146319efb3d60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2489949724fdbc36ddf4950223445e1c9829924718ac852f7146319efb3d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2489949724fdbc36ddf4950223445e1c9829924718ac852f7146319efb3d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:28 np0005532761 podman[76688]: 2025-11-23 20:39:28.72732637 +0000 UTC m=+0.020928573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:28 np0005532761 podman[76688]: 2025-11-23 20:39:28.824266624 +0000 UTC m=+0.117868837 container init cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:28 np0005532761 podman[76688]: 2025-11-23 20:39:28.830135567 +0000 UTC m=+0.123737750 container start cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:39:28 np0005532761 podman[76688]: 2025-11-23 20:39:28.83312774 +0000 UTC m=+0.126729943 container attach cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: Saving service mon spec with placement count:5
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: Saving service mgr spec with placement count:2
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 podman[76803]: 2025-11-23 20:39:29.061628004 +0000 UTC m=+0.053407618 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:39:29 np0005532761 podman[76803]: 2025-11-23 20:39:29.155122643 +0000 UTC m=+0.146902267 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1636127195' entity='client.admin' 
Nov 23 15:39:29 np0005532761 systemd[1]: libpod-cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc.scope: Deactivated successfully.
Nov 23 15:39:29 np0005532761 podman[76688]: 2025-11-23 20:39:29.190759996 +0000 UTC m=+0.484362199 container died cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bf2489949724fdbc36ddf4950223445e1c9829924718ac852f7146319efb3d60-merged.mount: Deactivated successfully.
Nov 23 15:39:29 np0005532761 podman[76688]: 2025-11-23 20:39:29.231100493 +0000 UTC m=+0.524702676 container remove cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc (image=quay.io/ceph/ceph:v19, name=elastic_hawking, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:39:29 np0005532761 systemd[1]: libpod-conmon-cde7203587f1eb87ff46c6436d4012d72f07d4576ae8664ae51b410fb68f2bdc.scope: Deactivated successfully.
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.294081545 +0000 UTC m=+0.040385609 container create 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 systemd[1]: Started libpod-conmon-9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82.scope.
Nov 23 15:39:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dafe44b1f113995567d9d7c56bd0226b69e96e1a79ac1d59fac6931606ecbba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dafe44b1f113995567d9d7c56bd0226b69e96e1a79ac1d59fac6931606ecbba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dafe44b1f113995567d9d7c56bd0226b69e96e1a79ac1d59fac6931606ecbba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.370062576 +0000 UTC m=+0.116366660 container init 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.275335637 +0000 UTC m=+0.021639721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.375830517 +0000 UTC m=+0.122134581 container start 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.381549267 +0000 UTC m=+0.127853351 container attach 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:29 np0005532761 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76967 (sysctl)
Nov 23 15:39:29 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:29 np0005532761 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 23 15:39:29 np0005532761 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 23 15:39:29 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Nov 23 15:39:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:29 np0005532761 systemd[1]: libpod-9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82.scope: Deactivated successfully.
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.73947613 +0000 UTC m=+0.485780194 container died 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8dafe44b1f113995567d9d7c56bd0226b69e96e1a79ac1d59fac6931606ecbba-merged.mount: Deactivated successfully.
Nov 23 15:39:29 np0005532761 podman[76865]: 2025-11-23 20:39:29.780049713 +0000 UTC m=+0.526353777 container remove 9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82 (image=quay.io/ceph/ceph:v19, name=hardcore_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 15:39:29 np0005532761 systemd[1]: libpod-conmon-9a72db6ccd164054768994b5e7f53f15a8e521725106c39a4ab1547fdf70dd82.scope: Deactivated successfully.
Nov 23 15:39:29 np0005532761 podman[76988]: 2025-11-23 20:39:29.839065399 +0000 UTC m=+0.037922600 container create 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:39:29 np0005532761 systemd[1]: Started libpod-conmon-9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137.scope.
Nov 23 15:39:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398636ff14148357f0ccc95582b3a1a0107aa28e9ea55a8570c7340006e08569/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398636ff14148357f0ccc95582b3a1a0107aa28e9ea55a8570c7340006e08569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398636ff14148357f0ccc95582b3a1a0107aa28e9ea55a8570c7340006e08569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:29 np0005532761 podman[76988]: 2025-11-23 20:39:29.893352947 +0000 UTC m=+0.092210158 container init 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:39:29 np0005532761 podman[76988]: 2025-11-23 20:39:29.900115713 +0000 UTC m=+0.098972914 container start 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:29 np0005532761 podman[76988]: 2025-11-23 20:39:29.903911676 +0000 UTC m=+0.102768877 container attach 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:39:29 np0005532761 podman[76988]: 2025-11-23 20:39:29.820892584 +0000 UTC m=+0.019749805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: Saving service crash spec with placement *
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1636127195' entity='client.admin' 
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:30 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Added label _admin to host compute-0
Nov 23 15:39:30 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 23 15:39:30 np0005532761 bold_germain[77010]: Added label _admin to host compute-0
Nov 23 15:39:30 np0005532761 systemd[1]: libpod-9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137.scope: Deactivated successfully.
Nov 23 15:39:30 np0005532761 podman[76988]: 2025-11-23 20:39:30.272623063 +0000 UTC m=+0.471480294 container died 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-398636ff14148357f0ccc95582b3a1a0107aa28e9ea55a8570c7340006e08569-merged.mount: Deactivated successfully.
Nov 23 15:39:30 np0005532761 podman[76988]: 2025-11-23 20:39:30.319273625 +0000 UTC m=+0.518130826 container remove 9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137 (image=quay.io/ceph/ceph:v19, name=bold_germain, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:30 np0005532761 systemd[1]: libpod-conmon-9cbe7c42ec323837267bb4d7570bf7fa0dc4f41da258f54944ef3d20747c2137.scope: Deactivated successfully.
Nov 23 15:39:30 np0005532761 podman[77141]: 2025-11-23 20:39:30.377604853 +0000 UTC m=+0.038481263 container create 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:30 np0005532761 systemd[1]: Started libpod-conmon-19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779.scope.
Nov 23 15:39:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84583c61d4622e4f9a85293c33bdec80b6a7c74f3bdb5feb4a582edd215b9612/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84583c61d4622e4f9a85293c33bdec80b6a7c74f3bdb5feb4a582edd215b9612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84583c61d4622e4f9a85293c33bdec80b6a7c74f3bdb5feb4a582edd215b9612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:30 np0005532761 podman[77141]: 2025-11-23 20:39:30.45508428 +0000 UTC m=+0.115960710 container init 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:39:30 np0005532761 podman[77141]: 2025-11-23 20:39:30.361241563 +0000 UTC m=+0.022117993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:30 np0005532761 podman[77141]: 2025-11-23 20:39:30.461055196 +0000 UTC m=+0.121931606 container start 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:30 np0005532761 podman[77141]: 2025-11-23 20:39:30.465113286 +0000 UTC m=+0.125989696 container attach 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:30 np0005532761 podman[77253]: 2025-11-23 20:39:30.84273541 +0000 UTC m=+0.076506163 container create 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:39:30 np0005532761 podman[77253]: 2025-11-23 20:39:30.793166427 +0000 UTC m=+0.026937200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:39:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Nov 23 15:39:30 np0005532761 systemd[1]: Started libpod-conmon-6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794.scope.
Nov 23 15:39:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3447607925' entity='client.admin' 
Nov 23 15:39:31 np0005532761 stupefied_lamport[77188]: set mgr/dashboard/cluster/status
Nov 23 15:39:31 np0005532761 systemd[1]: libpod-19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779.scope: Deactivated successfully.
Nov 23 15:39:31 np0005532761 podman[77253]: 2025-11-23 20:39:31.101285871 +0000 UTC m=+0.335056634 container init 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 15:39:31 np0005532761 podman[77141]: 2025-11-23 20:39:31.10368081 +0000 UTC m=+0.764557230 container died 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:31 np0005532761 podman[77253]: 2025-11-23 20:39:31.106077938 +0000 UTC m=+0.339848691 container start 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:31 np0005532761 gifted_gagarin[77271]: 167 167
Nov 23 15:39:31 np0005532761 systemd[1]: libpod-6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794.scope: Deactivated successfully.
Nov 23 15:39:31 np0005532761 conmon[77271]: conmon 6fc765467d22b1e5ecb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794.scope/container/memory.events
Nov 23 15:39:31 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:31 np0005532761 ceph-mon[74569]: Added label _admin to host compute-0
Nov 23 15:39:31 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:31 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3447607925' entity='client.admin' 
Nov 23 15:39:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-84583c61d4622e4f9a85293c33bdec80b6a7c74f3bdb5feb4a582edd215b9612-merged.mount: Deactivated successfully.
Nov 23 15:39:31 np0005532761 podman[77253]: 2025-11-23 20:39:31.290648937 +0000 UTC m=+0.524419680 container attach 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:31 np0005532761 podman[77253]: 2025-11-23 20:39:31.291045167 +0000 UTC m=+0.524815920 container died 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:31 np0005532761 podman[77141]: 2025-11-23 20:39:31.318508489 +0000 UTC m=+0.979384899 container remove 19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779 (image=quay.io/ceph/ceph:v19, name=stupefied_lamport, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:31 np0005532761 systemd[1]: libpod-conmon-19f77e43c7aff0ecb5448346b3a16c31bc609b9bf877d98bdecbcfefc0044779.scope: Deactivated successfully.
Nov 23 15:39:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-589bbd262bdda1d2ad31650a8db9c46a2dc5f8186e7c31b2e5df8bbee92ce248-merged.mount: Deactivated successfully.
Nov 23 15:39:31 np0005532761 podman[77253]: 2025-11-23 20:39:31.350090792 +0000 UTC m=+0.583861545 container remove 6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gagarin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:39:31 np0005532761 systemd[1]: libpod-conmon-6fc765467d22b1e5ecb5b63034df2c6e097e4b3a14b64fb240800702e22ea794.scope: Deactivated successfully.
Nov 23 15:39:31 np0005532761 podman[77310]: 2025-11-23 20:39:31.480857183 +0000 UTC m=+0.033445490 container create d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:31 np0005532761 systemd[1]: Started libpod-conmon-d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585.scope.
Nov 23 15:39:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b2aa891438ffa42b35fdc2004bd49e07ded287c4413e1a38a8aec5fa7e0a13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b2aa891438ffa42b35fdc2004bd49e07ded287c4413e1a38a8aec5fa7e0a13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b2aa891438ffa42b35fdc2004bd49e07ded287c4413e1a38a8aec5fa7e0a13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b2aa891438ffa42b35fdc2004bd49e07ded287c4413e1a38a8aec5fa7e0a13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 podman[77310]: 2025-11-23 20:39:31.548046458 +0000 UTC m=+0.100634784 container init d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:31 np0005532761 podman[77310]: 2025-11-23 20:39:31.555358197 +0000 UTC m=+0.107946503 container start d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:31 np0005532761 podman[77310]: 2025-11-23 20:39:31.55870181 +0000 UTC m=+0.111290116 container attach d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:39:31 np0005532761 podman[77310]: 2025-11-23 20:39:31.4664183 +0000 UTC m=+0.019006636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:39:31 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:31 np0005532761 python3[77357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:39:31 np0005532761 podman[77363]: 2025-11-23 20:39:31.865674935 +0000 UTC m=+0.048047497 container create 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:31 np0005532761 systemd[1]: Started libpod-conmon-04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314.scope.
Nov 23 15:39:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78371411c904ca4b2beb38cbe2f11db3d6078cdafcbffc166050b507f8b6fe4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78371411c904ca4b2beb38cbe2f11db3d6078cdafcbffc166050b507f8b6fe4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:31 np0005532761 podman[77363]: 2025-11-23 20:39:31.845299156 +0000 UTC m=+0.027671748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:31 np0005532761 podman[77363]: 2025-11-23 20:39:31.944838174 +0000 UTC m=+0.127210776 container init 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:39:31 np0005532761 podman[77363]: 2025-11-23 20:39:31.95082089 +0000 UTC m=+0.133193462 container start 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:39:31 np0005532761 podman[77363]: 2025-11-23 20:39:31.954614303 +0000 UTC m=+0.136986905 container attach 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4050187146' entity='client.admin' 
Nov 23 15:39:32 np0005532761 systemd[1]: libpod-04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314.scope: Deactivated successfully.
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]: [
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:    {
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "available": false,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "being_replaced": false,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "ceph_device_lvm": false,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "lsm_data": {},
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "lvs": [],
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "path": "/dev/sr0",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "rejected_reasons": [
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "Has a FileSystem",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "Insufficient space (<5GB)"
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        ],
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        "sys_api": {
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "actuators": null,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "device_nodes": [
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:                "sr0"
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            ],
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "devname": "sr0",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "human_readable_size": "482.00 KB",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "id_bus": "ata",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "model": "QEMU DVD-ROM",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "nr_requests": "2",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "parent": "/dev/sr0",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "partitions": {},
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "path": "/dev/sr0",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "removable": "1",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "rev": "2.5+",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "ro": "0",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "rotational": "1",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "sas_address": "",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "sas_device_handle": "",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "scheduler_mode": "mq-deadline",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "sectors": 0,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "sectorsize": "2048",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "size": 493568.0,
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "support_discard": "2048",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "type": "disk",
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:            "vendor": "QEMU"
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:        }
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]:    }
Nov 23 15:39:32 np0005532761 frosty_mahavira[77327]: ]
Nov 23 15:39:32 np0005532761 systemd[1]: libpod-d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585.scope: Deactivated successfully.
Nov 23 15:39:32 np0005532761 podman[77310]: 2025-11-23 20:39:32.333383616 +0000 UTC m=+0.885971922 container died d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:39:32 np0005532761 podman[78542]: 2025-11-23 20:39:32.348424844 +0000 UTC m=+0.028705044 container died 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Nov 23 15:39:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c8b2aa891438ffa42b35fdc2004bd49e07ded287c4413e1a38a8aec5fa7e0a13-merged.mount: Deactivated successfully.
Nov 23 15:39:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-78371411c904ca4b2beb38cbe2f11db3d6078cdafcbffc166050b507f8b6fe4c-merged.mount: Deactivated successfully.
Nov 23 15:39:32 np0005532761 podman[77310]: 2025-11-23 20:39:32.386317182 +0000 UTC m=+0.938905478 container remove d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:39:32 np0005532761 systemd[1]: libpod-conmon-d24b8f993bd2b4bca33371b30380e282dfe7c7cfc4609923920227844e6f9585.scope: Deactivated successfully.
Nov 23 15:39:32 np0005532761 podman[78542]: 2025-11-23 20:39:32.412465142 +0000 UTC m=+0.092745372 container remove 04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314 (image=quay.io/ceph/ceph:v19, name=reverent_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:32 np0005532761 systemd[1]: libpod-conmon-04c618abd9c3943cfffd7db80789caa2ef34a7790adb07a5ca353101957e2314.scope: Deactivated successfully.
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:39:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:32 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:39:32 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:39:32 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:39:32 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4050187146' entity='client.admin' 
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:33 np0005532761 ansible-async_wrapper.py[79042]: Invoked with j464601902930 30 /home/zuul/.ansible/tmp/ansible-tmp-1763930372.7607977-37217-85133091468045/AnsiballZ_command.py _
Nov 23 15:39:33 np0005532761 ansible-async_wrapper.py[79116]: Starting module and watcher
Nov 23 15:39:33 np0005532761 ansible-async_wrapper.py[79116]: Start watching 79118 (30)
Nov 23 15:39:33 np0005532761 ansible-async_wrapper.py[79118]: Start module (79118)
Nov 23 15:39:33 np0005532761 ansible-async_wrapper.py[79042]: Return async_wrapper task started.
Nov 23 15:39:33 np0005532761 python3[79119]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:39:33 np0005532761 podman[79173]: 2025-11-23 20:39:33.527981052 +0000 UTC m=+0.042301786 container create be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 15:39:33 np0005532761 systemd[1]: Started libpod-conmon-be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c.scope.
Nov 23 15:39:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:33 np0005532761 podman[79173]: 2025-11-23 20:39:33.510902185 +0000 UTC m=+0.025222949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013ed01eab7296e9f58cc36fd000dbc44bcb8d61cdfd9009c9de6036902f34d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013ed01eab7296e9f58cc36fd000dbc44bcb8d61cdfd9009c9de6036902f34d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:33 np0005532761 podman[79173]: 2025-11-23 20:39:33.621058012 +0000 UTC m=+0.135378776 container init be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:33 np0005532761 podman[79173]: 2025-11-23 20:39:33.628595356 +0000 UTC m=+0.142916100 container start be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:33 np0005532761 podman[79173]: 2025-11-23 20:39:33.633903626 +0000 UTC m=+0.148224400 container attach be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:39:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:39:33 np0005532761 bold_albattani[79236]: 
Nov 23 15:39:33 np0005532761 bold_albattani[79236]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:39:33 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:39:33 np0005532761 systemd[1]: libpod-be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c.scope: Deactivated successfully.
Nov 23 15:39:34 np0005532761 podman[79173]: 2025-11-23 20:39:33.999377434 +0000 UTC m=+0.513698178 container died be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-013ed01eab7296e9f58cc36fd000dbc44bcb8d61cdfd9009c9de6036902f34d3-merged.mount: Deactivated successfully.
Nov 23 15:39:34 np0005532761 podman[79173]: 2025-11-23 20:39:34.085360098 +0000 UTC m=+0.599680842 container remove be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c (image=quay.io/ceph/ceph:v19, name=bold_albattani, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:39:34 np0005532761 ansible-async_wrapper.py[79118]: Module complete (79118)
Nov 23 15:39:34 np0005532761 systemd[1]: libpod-conmon-be88ba171b3801e59da20b5f8b1c31c0fbba7821188a8e61e574969ab93f212c.scope: Deactivated successfully.
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:34 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 5a187ba5-34a2-4585-af18-36c229466de0 (Updating crash deployment (+1 -> 1))
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:34 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 23 15:39:34 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 23 15:39:34 np0005532761 python3[79767]: ansible-ansible.legacy.async_status Invoked with jid=j464601902930.79042 mode=status _async_dir=/root/.ansible_async
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.035798978 +0000 UTC m=+0.039316144 container create 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:39:35 np0005532761 python3[79834]: ansible-ansible.legacy.async_status Invoked with jid=j464601902930.79042 mode=cleanup _async_dir=/root/.ansible_async
Nov 23 15:39:35 np0005532761 systemd[1]: Started libpod-conmon-8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897.scope.
Nov 23 15:39:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.020510233 +0000 UTC m=+0.024027419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.116855362 +0000 UTC m=+0.120372548 container init 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.122411668 +0000 UTC m=+0.125928834 container start 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:35 np0005532761 gallant_goldwasser[79875]: 167 167
Nov 23 15:39:35 np0005532761 systemd[1]: libpod-8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897.scope: Deactivated successfully.
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.129664356 +0000 UTC m=+0.133181532 container attach 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.129984803 +0000 UTC m=+0.133501969 container died 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ac11cb4bad109034ab9236c798321520163d1effde9fa618aef7b9a6cef6e8e5-merged.mount: Deactivated successfully.
Nov 23 15:39:35 np0005532761 podman[79859]: 2025-11-23 20:39:35.19394407 +0000 UTC m=+0.197461236 container remove 8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:39:35 np0005532761 systemd[1]: libpod-conmon-8b7adf452460039a7ecc8c4e9342379a5a412dcb62290d79f5e635aa2a668897.scope: Deactivated successfully.
Nov 23 15:39:35 np0005532761 systemd[1]: Reloading.
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:39:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:39:35 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:39:35 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:39:35 np0005532761 systemd[1]: Reloading.
Nov 23 15:39:35 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:39:35 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:39:35 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:39:35 np0005532761 python3[79955]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:39:35 np0005532761 systemd[1]: Starting Ceph crash.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:39:36 np0005532761 podman[80045]: 2025-11-23 20:39:36.035967574 +0000 UTC m=+0.050123647 container create 54fdcc7d7f6dcc3b63483eaa034b23319c8b764b9f6d3cb3311b267f4b00d193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6ad6b5a9bea91469cbcba7ba9204ec304d432cd3a85c7c2715ae38cd7a9a32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6ad6b5a9bea91469cbcba7ba9204ec304d432cd3a85c7c2715ae38cd7a9a32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6ad6b5a9bea91469cbcba7ba9204ec304d432cd3a85c7c2715ae38cd7a9a32/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6ad6b5a9bea91469cbcba7ba9204ec304d432cd3a85c7c2715ae38cd7a9a32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 podman[80045]: 2025-11-23 20:39:36.006338569 +0000 UTC m=+0.020494642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:39:36 np0005532761 podman[80045]: 2025-11-23 20:39:36.182568554 +0000 UTC m=+0.196724627 container init 54fdcc7d7f6dcc3b63483eaa034b23319c8b764b9f6d3cb3311b267f4b00d193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:36 np0005532761 podman[80045]: 2025-11-23 20:39:36.189151005 +0000 UTC m=+0.203307058 container start 54fdcc7d7f6dcc3b63483eaa034b23319c8b764b9f6d3cb3311b267f4b00d193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:39:36 np0005532761 bash[80045]: 54fdcc7d7f6dcc3b63483eaa034b23319c8b764b9f6d3cb3311b267f4b00d193
Nov 23 15:39:36 np0005532761 systemd[1]: Started Ceph crash.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 23 15:39:36 np0005532761 python3[80090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: Deploying daemon crash.compute-0 on compute-0
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.344742814 +0000 UTC m=+0.063040804 container create 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.351+0000 7f458ea85640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.351+0000 7f458ea85640 -1 AuthRegistry(0x7f4588069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.352+0000 7f458ea85640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 15:39:36 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 5a187ba5-34a2-4585-af18-36c229466de0 (Updating crash deployment (+1 -> 1))
Nov 23 15:39:36 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 5a187ba5-34a2-4585-af18-36c229466de0 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.352+0000 7f458ea85640 -1 AuthRegistry(0x7f458ea83ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.353+0000 7f4587fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: 2025-11-23T20:39:36.353+0000 7f458ea85640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 23 15:39:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:39:36 np0005532761 systemd[1]: Started libpod-conmon-76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef.scope.
Nov 23 15:39:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.308684601 +0000 UTC m=+0.026982611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895e12e538797a0a8532bd2613eb02057108ac9b9e6b768cde4ebb036cb2e80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895e12e538797a0a8532bd2613eb02057108ac9b9e6b768cde4ebb036cb2e80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895e12e538797a0a8532bd2613eb02057108ac9b9e6b768cde4ebb036cb2e80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.490644106 +0000 UTC m=+0.208942106 container init 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.497513714 +0000 UTC m=+0.215811704 container start 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.502704992 +0000 UTC m=+0.221002982 container attach 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:39:36 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:39:36 np0005532761 heuristic_volhard[80119]: 
Nov 23 15:39:36 np0005532761 heuristic_volhard[80119]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 23 15:39:36 np0005532761 systemd[1]: libpod-76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef.scope: Deactivated successfully.
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.860142383 +0000 UTC m=+0.578440383 container died 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:39:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a895e12e538797a0a8532bd2613eb02057108ac9b9e6b768cde4ebb036cb2e80-merged.mount: Deactivated successfully.
Nov 23 15:39:36 np0005532761 podman[80095]: 2025-11-23 20:39:36.92373716 +0000 UTC m=+0.642035150 container remove 76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef (image=quay.io/ceph/ceph:v19, name=heuristic_volhard, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 15:39:36 np0005532761 systemd[1]: libpod-conmon-76ce03b6e0361b2a407175d502ae6008464c427857e94268a68d2235ea12a2ef.scope: Deactivated successfully.
Nov 23 15:39:37 np0005532761 podman[80301]: 2025-11-23 20:39:37.150413339 +0000 UTC m=+0.075080069 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:37 np0005532761 podman[80301]: 2025-11-23 20:39:37.28033539 +0000 UTC m=+0.205002100 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 python3[80346]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
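For readability, the _raw_params recorded in the Ansible task above expand to the following podman invocation (line breaks added; image, paths, and FSID exactly as logged):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set global log_to_file true

The mon acknowledges this shortly afterwards as mon_command([{prefix=config set, name=log_to_file}]) from entity client.admin.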
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.455495998 +0000 UTC m=+0.049333098 container create 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:37 np0005532761 systemd[1]: Started libpod-conmon-48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47.scope.
Nov 23 15:39:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3342359596313df6a6b19d77223d813e12df83c9e3b2fc5a6db0871aab9e36/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3342359596313df6a6b19d77223d813e12df83c9e3b2fc5a6db0871aab9e36/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3342359596313df6a6b19d77223d813e12df83c9e3b2fc5a6db0871aab9e36/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.430351443 +0000 UTC m=+0.024188563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.538756647 +0000 UTC m=+0.132593727 container init 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.546834675 +0000 UTC m=+0.140671785 container start 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.55195485 +0000 UTC m=+0.145791930 container attach 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
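The TOO_FEW_OSDS health check above fires because no OSDs have registered yet while osd_pool_default_size is 1; it should clear once OSDs are deployed later in the bootstrap. A minimal way to inspect the warning interactively (assuming the standard ceph CLI, as used elsewhere in this log) would be:

    ceph health detail
    # expected to include: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)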
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:39:37 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 1 completed events
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Nov 23 15:39:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2257139554' entity='client.admin' 
Nov 23 15:39:37 np0005532761 systemd[1]: libpod-48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47.scope: Deactivated successfully.
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.927128375 +0000 UTC m=+0.520965445 container died 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:39:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9d3342359596313df6a6b19d77223d813e12df83c9e3b2fc5a6db0871aab9e36-merged.mount: Deactivated successfully.
Nov 23 15:39:37 np0005532761 podman[80378]: 2025-11-23 20:39:37.96899293 +0000 UTC m=+0.562830010 container remove 48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47 (image=quay.io/ceph/ceph:v19, name=romantic_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:39:37 np0005532761 systemd[1]: libpod-conmon-48c0bb06aaf1d2d571618860dc0e6993601cd6ee95a312f2d76001e4cf15ed47.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.136556223 +0000 UTC m=+0.045774162 container create 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:38 np0005532761 systemd[1]: Started libpod-conmon-4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d.scope.
Nov 23 15:39:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.114915053 +0000 UTC m=+0.024133022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.213901656 +0000 UTC m=+0.123119595 container init 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.220484028 +0000 UTC m=+0.129701997 container start 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.226674019 +0000 UTC m=+0.135891978 container attach 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:39:38 np0005532761 great_wright[80581]: 167 167
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.229895647 +0000 UTC m=+0.139113586 container died 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:39:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-832711d846de293bfb2a7f319c82c1e3894bb642fb8b724c1b1686a8b7a3b898-merged.mount: Deactivated successfully.
Nov 23 15:39:38 np0005532761 python3[80577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
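This Ansible task reuses the same podman wrapper as the log_to_file task above; only the trailing ceph arguments differ. Inside the container it reduces to:

    ceph config set global mon_cluster_log_to_file true

The matching mon_command([{prefix=config set, name=mon_cluster_log_to_file}]) dispatch from client.admin appears at 15:39:38 below.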
Nov 23 15:39:38 np0005532761 podman[80546]: 2025-11-23 20:39:38.268513753 +0000 UTC m=+0.177731692 container remove 4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d (image=quay.io/ceph/ceph:v19, name=great_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-conmon-4a9300b87ffe79a33d4d0d2eb703f2bae7d540e10e31ad68da4fbcea7304422d.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.320500526 +0000 UTC m=+0.035222933 container create adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:38 np0005532761 ansible-async_wrapper.py[79116]: Done in kid B.
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.oyehye (unknown last config time)...
Nov 23 15:39:38 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.oyehye (unknown last config time)...
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2257139554' entity='client.admin' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:38 np0005532761 systemd[1]: Started libpod-conmon-adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169.scope.
Nov 23 15:39:38 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:39:38 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:39:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f3194035436663947ea1ade32d5d814e3b108b9013e62a293ac3bec27ddc76/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f3194035436663947ea1ade32d5d814e3b108b9013e62a293ac3bec27ddc76/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f3194035436663947ea1ade32d5d814e3b108b9013e62a293ac3bec27ddc76/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.400126255 +0000 UTC m=+0.114848682 container init adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.305682034 +0000 UTC m=+0.020404451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.405018956 +0000 UTC m=+0.119741353 container start adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.408706445 +0000 UTC m=+0.123428842 container attach adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Nov 23 15:39:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3115008113' entity='client.admin' 
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.783646075 +0000 UTC m=+0.498368482 container died adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.810227886 +0000 UTC m=+0.057007847 container create 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:39:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-37f3194035436663947ea1ade32d5d814e3b108b9013e62a293ac3bec27ddc76-merged.mount: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80598]: 2025-11-23 20:39:38.846778741 +0000 UTC m=+0.561501138 container remove adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169 (image=quay.io/ceph/ceph:v19, name=epic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:39:38 np0005532761 systemd[1]: Started libpod-conmon-38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6.scope.
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-conmon-adec0441775fc8453eee18d7573df9a49566398a6d8cce9f34a9fcc8cbf59169.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.780392206 +0000 UTC m=+0.027172197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.902746031 +0000 UTC m=+0.149526012 container init 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.908628625 +0000 UTC m=+0.155408586 container start 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:39:38 np0005532761 fervent_cray[80735]: 167 167
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.913417913 +0000 UTC m=+0.160197904 container attach 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6.scope: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.913873523 +0000 UTC m=+0.160653474 container died 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:39:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a06538a028c4f4f71fb9c1e9701aa96a8ad0471e7ad2f1ea260741725f35b9aa-merged.mount: Deactivated successfully.
Nov 23 15:39:38 np0005532761 podman[80702]: 2025-11-23 20:39:38.959296495 +0000 UTC m=+0.206076456 container remove 38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6 (image=quay.io/ceph/ceph:v19, name=fervent_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:38 np0005532761 systemd[1]: libpod-conmon-38bf3ea7017c3e467a682d4109d312f6afda8f12351e087f0838cedcd09360a6.scope: Deactivated successfully.
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:39:39 np0005532761 python3[80777]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
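Stripped of the same podman wrapper, this task reduces to:

    ceph osd set-require-min-compat-client mimic

The mon dispatches and finishes this command at 15:39:40-15:39:41 below, and the container (festive_hawking) prints "set require_min_compat_client to mimic" before exiting.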
Nov 23 15:39:39 np0005532761 podman[80778]: 2025-11-23 20:39:39.223164016 +0000 UTC m=+0.018729190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:39 np0005532761 podman[80778]: 2025-11-23 20:39:39.78823041 +0000 UTC m=+0.583795604 container create 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: Reconfiguring mgr.compute-0.oyehye (unknown last config time)...
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3115008113' entity='client.admin' 
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:40 np0005532761 systemd[1]: Started libpod-conmon-885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900.scope.
Nov 23 15:39:40 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3685ad71b08f33748bfb7338d40d4843ef0dfa06d6aff4ac4b0d1f4fa1b1850c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3685ad71b08f33748bfb7338d40d4843ef0dfa06d6aff4ac4b0d1f4fa1b1850c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3685ad71b08f33748bfb7338d40d4843ef0dfa06d6aff4ac4b0d1f4fa1b1850c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:40 np0005532761 podman[80778]: 2025-11-23 20:39:40.154690632 +0000 UTC m=+0.950255806 container init 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:39:40 np0005532761 podman[80778]: 2025-11-23 20:39:40.160110674 +0000 UTC m=+0.955675828 container start 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:39:40 np0005532761 podman[80778]: 2025-11-23 20:39:40.165127158 +0000 UTC m=+0.960692312 container attach 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Nov 23 15:39:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3735469955' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3735469955' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3735469955' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 23 15:39:41 np0005532761 festive_hawking[80818]: set require_min_compat_client to mimic
Nov 23 15:39:41 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 23 15:39:41 np0005532761 systemd[1]: libpod-885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900.scope: Deactivated successfully.
Nov 23 15:39:41 np0005532761 podman[80778]: 2025-11-23 20:39:41.206575694 +0000 UTC m=+2.002140848 container died 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 23 15:39:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3685ad71b08f33748bfb7338d40d4843ef0dfa06d6aff4ac4b0d1f4fa1b1850c-merged.mount: Deactivated successfully.
Nov 23 15:39:41 np0005532761 podman[80778]: 2025-11-23 20:39:41.244129123 +0000 UTC m=+2.039694277 container remove 885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900 (image=quay.io/ceph/ceph:v19, name=festive_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 15:39:41 np0005532761 systemd[1]: libpod-conmon-885185534bb72faa44641b109948319cff1d5b031b43427063b97d73a85a2900.scope: Deactivated successfully.
Nov 23 15:39:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:41 np0005532761 python3[80882]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
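The final task in this run applies the service specification. Inside the container it reduces to the command below; note that /home/ceph_spec.yaml is the bind-mounted copy of /home/ceph-admin/specs/ceph_spec.yaml from the --volume flags:

    ceph orch apply --in-file /home/ceph_spec.yaml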
Nov 23 15:39:41 np0005532761 podman[80883]: 2025-11-23 20:39:41.91057189 +0000 UTC m=+0.045211618 container create cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:41 np0005532761 systemd[1]: Started libpod-conmon-cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37.scope.
Nov 23 15:39:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417e50507440dbf997acdcbdf66263fdd3dea9ab77d579c3c66afdbabe2f49e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417e50507440dbf997acdcbdf66263fdd3dea9ab77d579c3c66afdbabe2f49e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d417e50507440dbf997acdcbdf66263fdd3dea9ab77d579c3c66afdbabe2f49e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:41 np0005532761 podman[80883]: 2025-11-23 20:39:41.970920817 +0000 UTC m=+0.105560555 container init cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:39:41 np0005532761 podman[80883]: 2025-11-23 20:39:41.976877974 +0000 UTC m=+0.111517702 container start cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:39:41 np0005532761 podman[80883]: 2025-11-23 20:39:41.979883537 +0000 UTC m=+0.114523265 container attach cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:39:41 np0005532761 podman[80883]: 2025-11-23 20:39:41.889874243 +0000 UTC m=+0.024513981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3735469955' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 23 15:39:42 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:42 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Added host compute-0
Nov 23 15:39:42 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
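[editor's note] The two mon_commands dispatched here are what cephadm uses to build the per-host config/keyring it distributes. The equivalent manual CLI, as a sketch (assumes the admin keyring is already readable):

    ceph config generate-minimal-conf   # prints a minimal ceph.conf (fsid + mon_host)
    ceph auth get client.admin          # prints the client.admin keyring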
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:39:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:43 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 23 15:39:43 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 23 15:39:44 np0005532761 ceph-mon[74569]: Added host compute-0
Nov 23 15:39:45 np0005532761 ceph-mon[74569]: Deploying cephadm binary to compute-1
Nov 23 15:39:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:39:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Added host compute-1
Nov 23 15:39:47 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: Added host compute-1
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:39:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:49 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 23 15:39:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 23 15:39:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:49 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:49 np0005532761 ceph-mon[74569]: Deploying cephadm binary to compute-2
Nov 23 15:39:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:39:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:50 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Added host compute-2
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Added host 'compute-0' with addr '192.168.122.100'
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Added host 'compute-1' with addr '192.168.122.101'
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Added host 'compute-2' with addr '192.168.122.102'
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Scheduled mon update...
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Scheduled mgr update...
Nov 23 15:39:53 np0005532761 agitated_kepler[80898]: Scheduled osd.default_drive_group update...
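[editor's note] The mon, mgr, and osd.default_drive_group specs saved above imply the shape of /home/ceph-admin/specs/ceph_spec.yaml. A plausible reconstruction as a sketch, assembled from the "Saving service ... spec with placement" messages and the spec text echoed at 15:40:17; the osd data_devices filter is an assumption, since the log never shows it:

    # Hypothetical reconstruction of the applied spec file (sketch, not the verbatim source)
    cat > ceph_spec.yaml <<'EOF'
    service_type: mon
    service_name: mon
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: mgr
    service_name: mgr
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    spec:
      data_devices:   # assumption: device filter not shown in the log
        all: true
    EOF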
Nov 23 15:39:53 np0005532761 systemd[1]: libpod-cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37.scope: Deactivated successfully.
Nov 23 15:39:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:53 np0005532761 podman[80883]: 2025-11-23 20:39:53.629879388 +0000 UTC m=+11.764519156 container died cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:39:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d417e50507440dbf997acdcbdf66263fdd3dea9ab77d579c3c66afdbabe2f49e-merged.mount: Deactivated successfully.
Nov 23 15:39:53 np0005532761 podman[80883]: 2025-11-23 20:39:53.709923198 +0000 UTC m=+11.844562926 container remove cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37 (image=quay.io/ceph/ceph:v19, name=agitated_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:39:53 np0005532761 systemd[1]: libpod-conmon-cbe77c92548e946e531a35bc4242a75458e3347d8c76f092d95404854fdf9f37.scope: Deactivated successfully.
Nov 23 15:39:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:54 np0005532761 python3[81062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.156338868 +0000 UTC m=+0.048449537 container create 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:54 np0005532761 systemd[1]: Started libpod-conmon-0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf.scope.
Nov 23 15:39:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:39:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabcf04ecf6693f1f8bf650c6cfb4174eb6efadb73b89850d3ce3e6b910bde6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabcf04ecf6693f1f8bf650c6cfb4174eb6efadb73b89850d3ce3e6b910bde6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabcf04ecf6693f1f8bf650c6cfb4174eb6efadb73b89850d3ce3e6b910bde6b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.131679385 +0000 UTC m=+0.023790074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.255717251 +0000 UTC m=+0.147827940 container init 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.262284342 +0000 UTC m=+0.154395011 container start 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.266469714 +0000 UTC m=+0.158580403 container attach 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Added host compute-2
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 15:39:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962075108' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 15:39:54 np0005532761 trusting_lalande[81080]: 
Nov 23 15:39:54 np0005532761 trusting_lalande[81080]: {"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":55,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-23T20:38:56:367641+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-23T20:38:56.370068+0000","services":{}},"progress_events":{}}
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.738626834 +0000 UTC m=+0.630737503 container died 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:39:54 np0005532761 systemd[1]: libpod-0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf.scope: Deactivated successfully.
Nov 23 15:39:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-dabcf04ecf6693f1f8bf650c6cfb4174eb6efadb73b89850d3ce3e6b910bde6b-merged.mount: Deactivated successfully.
Nov 23 15:39:54 np0005532761 podman[81064]: 2025-11-23 20:39:54.84137427 +0000 UTC m=+0.733484949 container remove 0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf (image=quay.io/ceph/ceph:v19, name=trusting_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:39:54 np0005532761 systemd[1]: libpod-conmon-0d8a90a7b91ad3d82174755a1b335b389165ee7e1e125f7efadb47c7fd031ecf.scope: Deactivated successfully.
Nov 23 15:39:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:39:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:39:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:40:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:40:15 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:40:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:40:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:16 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:40:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:40:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:40:17
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] No pools available
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 321bc602-7998-4993-9dcc-cc9d58a4510d (Updating crash deployment (+1 -> 2))
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:40:17.666+0000 7fd136425640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: service_name: mon
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: placement:
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  hosts:
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-0
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-1
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-2
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:40:17.667+0000 7fd136425640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: service_name: mgr
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: placement:
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  hosts:
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-0
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-1
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  - compute-2
Nov 23 15:40:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
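[editor's note] The mgr is provisioning the crash-report agent's credentials here. The mon_command above corresponds to this CLI, shown as a sketch:

    # Equivalent to the auth get-or-create dispatched by the mgr for the crash daemon
    ceph auth get-or-create client.crash.compute-1 \
        mon 'profile crash' mgr 'profile crash'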
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: Deploying daemon crash.compute-1 on compute-1
Nov 23 15:40:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:19 np0005532761 ceph-mon[74569]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
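[editor's note] Despite "Added host compute-2" at 15:39:53, the serve loop rejects the mon/mgr placement with "Unknown hosts", which suggests it evaluated a stale inventory and raced the host add; typically the next serve cycle re-applies the saved specs and the health check clears on its own. If it persists, one way to verify and recover, as a sketch (hostnames and addresses taken from this log):

    ceph orch host ls                              # compute-2 should appear in the host list
    ceph orch host add compute-2 192.168.122.102   # re-add only if it is missing
    ceph orch apply -i /home/ceph_spec.yaml        # re-apply the saved specs
    ceph health detail                             # CEPHADM_APPLY_SPEC_FAIL should clear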
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 321bc602-7998-4993-9dcc-cc9d58a4510d (Updating crash deployment (+1 -> 2))
Nov 23 15:40:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 321bc602-7998-4993-9dcc-cc9d58a4510d (Updating crash deployment (+1 -> 2)) in 3 seconds
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:20 np0005532761 podman[81209]: 2025-11-23 20:40:20.881360911 +0000 UTC m=+0.049044631 container create a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:40:20 np0005532761 systemd[1]: Started libpod-conmon-a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36.scope.
Nov 23 15:40:20 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:20 np0005532761 podman[81209]: 2025-11-23 20:40:20.852558369 +0000 UTC m=+0.020242089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:21 np0005532761 podman[81209]: 2025-11-23 20:40:21.047935142 +0000 UTC m=+0.215618872 container init a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:21 np0005532761 podman[81209]: 2025-11-23 20:40:21.054413727 +0000 UTC m=+0.222097417 container start a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:40:21 np0005532761 loving_noether[81225]: 167 167
Nov 23 15:40:21 np0005532761 podman[81209]: 2025-11-23 20:40:21.059413803 +0000 UTC m=+0.227097523 container attach a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:21 np0005532761 systemd[1]: libpod-a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36.scope: Deactivated successfully.
Nov 23 15:40:21 np0005532761 conmon[81225]: conmon a61342d8d05503af8388 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36.scope/container/memory.events
Nov 23 15:40:21 np0005532761 podman[81209]: 2025-11-23 20:40:21.061663034 +0000 UTC m=+0.229346744 container died a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:21 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7fa9b90ffa77ec1458dd31bbae0214ef7cedaf52a97b7292b3e464917f7ca0c0-merged.mount: Deactivated successfully.
Nov 23 15:40:21 np0005532761 podman[81209]: 2025-11-23 20:40:21.112287708 +0000 UTC m=+0.279971398 container remove a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:40:21 np0005532761 systemd[1]: libpod-conmon-a61342d8d05503af8388b442ab53ece2bf6627f7036fe1d07839a01065075a36.scope: Deactivated successfully.
Nov 23 15:40:21 np0005532761 podman[81249]: 2025-11-23 20:40:21.280965925 +0000 UTC m=+0.050860401 container create 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:21 np0005532761 systemd[1]: Started libpod-conmon-6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10.scope.
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:40:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:40:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532761 podman[81249]: 2025-11-23 20:40:21.258976518 +0000 UTC m=+0.028871024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:21 np0005532761 podman[81249]: 2025-11-23 20:40:21.364824931 +0000 UTC m=+0.134719407 container init 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:40:21 np0005532761 podman[81249]: 2025-11-23 20:40:21.371922874 +0000 UTC m=+0.141817350 container start 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Nov 23 15:40:21 np0005532761 podman[81249]: 2025-11-23 20:40:21.375692856 +0000 UTC m=+0.145587322 container attach 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:21 np0005532761 pensive_mirzakhani[81265]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:40:21 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:21 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:21 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 71c99843-04fc-447b-a9fd-4e17520a545c
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]': finished
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]': finished
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:22 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]': finished
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]': finished
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 23 15:40:22 np0005532761 lvm[81327]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:22 np0005532761 lvm[81327]: VG ceph_vg0 finished
Nov 23 15:40:22 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 2 completed events
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/803347647' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 23 15:40:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184528919' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: stderr: got monmap epoch 1
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: --> Creating keyring file for osd.1
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 23 15:40:22 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 71c99843-04fc-447b-a9fd-4e17520a545c --setuser ceph --setgroup ceph
Nov 23 15:40:23 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:24 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 23 15:40:24 np0005532761 ceph-mon[74569]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 23 15:40:25 np0005532761 python3[81610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.295269553 +0000 UTC m=+0.065335625 container create 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:25 np0005532761 systemd[1]: Started libpod-conmon-318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4.scope.
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.264307673 +0000 UTC m=+0.034373775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:40:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ea04023d8643e45888bffac9aa8c4a872f374ec2849ecccb8e4c0c1613ba1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ea04023d8643e45888bffac9aa8c4a872f374ec2849ecccb8e4c0c1613ba1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ea04023d8643e45888bffac9aa8c4a872f374ec2849ecccb8e4c0c1613ba1b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.388882413 +0000 UTC m=+0.158948495 container init 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.401887636 +0000 UTC m=+0.171953738 container start 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.405926796 +0000 UTC m=+0.175992878 container attach 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Nov 23 15:40:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 15:40:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749114053' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 15:40:25 np0005532761 distracted_vaughan[81811]: 
Nov 23 15:40:25 np0005532761 distracted_vaughan[81811]: {"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1763930422,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-23T20:38:56:367641+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-23T20:40:19.668770+0000","services":{}},"progress_events":{}}
Nov 23 15:40:25 np0005532761 systemd[1]: libpod-318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4.scope: Deactivated successfully.
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.847926781 +0000 UTC m=+0.617992843 container died 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f2ea04023d8643e45888bffac9aa8c4a872f374ec2849ecccb8e4c0c1613ba1b-merged.mount: Deactivated successfully.
Nov 23 15:40:25 np0005532761 podman[81795]: 2025-11-23 20:40:25.898473612 +0000 UTC m=+0.668539714 container remove 318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4 (image=quay.io/ceph/ceph:v19, name=distracted_vaughan, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:40:25 np0005532761 systemd[1]: libpod-conmon-318052afe182452b8b28abe3e45453bbda03c820d1dc5442cf99675252bfbab4.scope: Deactivated successfully.
Nov 23 15:40:26 np0005532761 pensive_mirzakhani[81265]: stderr: 2025-11-23T20:40:22.982+0000 7fc033eaa740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Nov 23 15:40:26 np0005532761 pensive_mirzakhani[81265]: stderr: 2025-11-23T20:40:23.248+0000 7fc033eaa740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 23 15:40:26 np0005532761 pensive_mirzakhani[81265]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 23 15:40:26 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:26 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 23 15:40:27 np0005532761 pensive_mirzakhani[81265]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 23 15:40:27 np0005532761 systemd[1]: libpod-6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532761 systemd[1]: libpod-6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10.scope: Consumed 2.249s CPU time.
Nov 23 15:40:27 np0005532761 podman[81249]: 2025-11-23 20:40:27.245583399 +0000 UTC m=+6.015477865 container died 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 15:40:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3a23ce291f169aae2e3af81816040c62b0d4f5a109bf7b15fa4767b784baff5f-merged.mount: Deactivated successfully.
Nov 23 15:40:27 np0005532761 podman[81249]: 2025-11-23 20:40:27.305087245 +0000 UTC m=+6.074981711 container remove 6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:40:27 np0005532761 systemd[1]: libpod-conmon-6c0a754ef02357269059652fa55ccd589be629fac1f802b68b7601cd093adb10.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.812365561 +0000 UTC m=+0.046095022 container create 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:27 np0005532761 systemd[1]: Started libpod-conmon-3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63.scope.
Nov 23 15:40:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.792147231 +0000 UTC m=+0.025876712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.900235696 +0000 UTC m=+0.133965177 container init 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.907706058 +0000 UTC m=+0.141435519 container start 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 15:40:27 np0005532761 infallible_tharp[82453]: 167 167
Nov 23 15:40:27 np0005532761 systemd[1]: libpod-3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.914002438 +0000 UTC m=+0.147731919 container attach 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.914556454 +0000 UTC m=+0.148285915 container died 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8e354c06c803eb227802525b4a74beab472575c4851f0a7224f16a7693b77ffa-merged.mount: Deactivated successfully.
Nov 23 15:40:27 np0005532761 podman[82437]: 2025-11-23 20:40:27.984223065 +0000 UTC m=+0.217952526 container remove 3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 23 15:40:27 np0005532761 systemd[1]: libpod-conmon-3cba830688b616e7a504913d9b4557d4652ea8a59c9696314bf319a8acec7d63.scope: Deactivated successfully.
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.156647364 +0000 UTC m=+0.071636185 container create 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:28 np0005532761 systemd[1]: Started libpod-conmon-57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311.scope.
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.107576242 +0000 UTC m=+0.022565063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815d7333db4e27598ea005a47eac1b6580257c015e67dbda4b8b55974642567b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815d7333db4e27598ea005a47eac1b6580257c015e67dbda4b8b55974642567b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815d7333db4e27598ea005a47eac1b6580257c015e67dbda4b8b55974642567b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815d7333db4e27598ea005a47eac1b6580257c015e67dbda4b8b55974642567b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.251762335 +0000 UTC m=+0.166751216 container init 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.259167676 +0000 UTC m=+0.174156497 container start 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.297471765 +0000 UTC m=+0.212460606 container attach 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Nov 23 15:40:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]: {
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:    "1": [
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:        {
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "devices": [
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "/dev/loop3"
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            ],
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "lv_name": "ceph_lv0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "lv_size": "21470642176",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "name": "ceph_lv0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "tags": {
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.cluster_name": "ceph",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.crush_device_class": "",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.encrypted": "0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.osd_id": "1",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.type": "block",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.vdo": "0",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:                "ceph.with_tpm": "0"
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            },
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "type": "block",
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:            "vg_name": "ceph_vg0"
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:        }
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]:    ]
Nov 23 15:40:28 np0005532761 sweet_lederberg[82492]: }
Nov 23 15:40:28 np0005532761 systemd[1]: libpod-57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311.scope: Deactivated successfully.
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.565678334 +0000 UTC m=+0.480667155 container died 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:40:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-815d7333db4e27598ea005a47eac1b6580257c015e67dbda4b8b55974642567b-merged.mount: Deactivated successfully.
Nov 23 15:40:28 np0005532761 podman[82476]: 2025-11-23 20:40:28.647464633 +0000 UTC m=+0.562453454 container remove 57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lederberg, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 15:40:28 np0005532761 systemd[1]: libpod-conmon-57324015437e19b1e81c68a4e93a3a01ac7e39c0da96b51a2c9d87568e6da311.scope: Deactivated successfully.
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 23 15:40:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: Deploying daemon osd.0 on compute-1
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:40:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.186672425 +0000 UTC m=+0.040169620 container create f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:29 np0005532761 systemd[1]: Started libpod-conmon-f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4.scope.
Nov 23 15:40:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.262399281 +0000 UTC m=+0.115896476 container init f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.169554401 +0000 UTC m=+0.023051626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.268953149 +0000 UTC m=+0.122450334 container start f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Nov 23 15:40:29 np0005532761 jolly_taussig[82619]: 167 167
Nov 23 15:40:29 np0005532761 systemd[1]: libpod-f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.283068992 +0000 UTC m=+0.136566187 container attach f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.283611077 +0000 UTC m=+0.137108272 container died f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3458e4790303f4652fdb9ed5b5dceefef08ac6795a544fdbcb62ee82d7c0c9a8-merged.mount: Deactivated successfully.
Nov 23 15:40:29 np0005532761 podman[82603]: 2025-11-23 20:40:29.338169348 +0000 UTC m=+0.191666543 container remove f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_taussig, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:29 np0005532761 systemd[1]: libpod-conmon-f724fddcbae918ae6fb00dfa4e509d7ddea6c53a36de86c1e0c9d477a930fbd4.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.591719868 +0000 UTC m=+0.059393623 container create 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 15:40:29 np0005532761 systemd[1]: Started libpod-conmon-4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36.scope.
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.557739155 +0000 UTC m=+0.025412930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.804289117 +0000 UTC m=+0.271962872 container init 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.810601868 +0000 UTC m=+0.278275623 container start 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:29 np0005532761 ceph-mon[74569]: Deploying daemon osd.1 on compute-0
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.828630307 +0000 UTC m=+0.296304082 container attach 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test[82668]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 23 15:40:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test[82668]:                            [--no-systemd] [--no-tmpfs]
Nov 23 15:40:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test[82668]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 23 15:40:29 np0005532761 systemd[1]: libpod-4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532761 podman[82652]: 2025-11-23 20:40:29.992320539 +0000 UTC m=+0.459994294 container died 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:40:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-28bb85facc98c388b09316df14913bb72eab192d8a8e9f9edb961049359de4c8-merged.mount: Deactivated successfully.
Nov 23 15:40:30 np0005532761 podman[82652]: 2025-11-23 20:40:30.081647894 +0000 UTC m=+0.549321669 container remove 4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:40:30 np0005532761 systemd[1]: libpod-conmon-4f4b3f8a473c541d683ce5d09f808e753ec2f07e624ab4534a72d1bdae94dc36.scope: Deactivated successfully.
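The short-lived osd-1-activate-test container above is a capability probe: cephadm appears to invoke ceph-volume activate with a flag it knows is invalid (--bad-option) purely to capture the usage text, which reveals whether this ceph-volume build understands options such as --no-tmpfs. A sketch of that probing pattern in Python, assuming podman and the image digest from the log (the parsing logic here is illustrative, not cephadm's actual code):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Force an argparse error; the usage message lists the supported options.
    probe = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "ceph-volume", IMAGE,
         "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    print("--no-tmpfs supported:",
          "--no-tmpfs" in (probe.stdout + probe.stderr))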
Nov 23 15:40:30 np0005532761 systemd[1]: Reloading.
Nov 23 15:40:30 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:30 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:30 np0005532761 systemd[1]: Reloading.
Nov 23 15:40:30 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:30 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:30 np0005532761 systemd[1]: Starting Ceph osd.1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
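The unit being started follows cephadm's ceph-<fsid>@<daemon>.service naming, so this OSD can be inspected on the host directly. A usage sketch with the fsid taken from the log line above:

    import subprocess

    fsid = "03808be8-ae4a-5548-82e6-4a294f1bc627"
    unit = f"ceph-{fsid}@osd.1.service"
    # shows the podman wrapper, the conmon scope and the ceph-osd state
    subprocess.run(["systemctl", "status", "--no-pager", unit])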
Nov 23 15:40:31 np0005532761 podman[82831]: 2025-11-23 20:40:31.129113939 +0000 UTC m=+0.028244647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:31 np0005532761 podman[82831]: 2025-11-23 20:40:31.264069052 +0000 UTC m=+0.163199750 container create ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532761 podman[82831]: 2025-11-23 20:40:31.493023565 +0000 UTC m=+0.392154293 container init ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:31 np0005532761 podman[82831]: 2025-11-23 20:40:31.499927392 +0000 UTC m=+0.399058070 container start ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:31 np0005532761 podman[82831]: 2025-11-23 20:40:31.623832274 +0000 UTC m=+0.522962972 container attach ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532761 bash[82831]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532761 bash[82831]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:32 np0005532761 lvm[82927]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:32 np0005532761 lvm[82927]: VG ceph_vg0 finished
Nov 23 15:40:32 np0005532761 lvm[82929]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:32 np0005532761 lvm[82929]: VG ceph_vg0 finished
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:32 np0005532761 bash[82831]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:32 np0005532761 bash[82831]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 15:40:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate[82846]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 23 15:40:32 np0005532761 bash[82831]: --> ceph-volume lvm activate successful for osd ID: 1
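The activation above first tries raw mode, finds no matching raw OSD ("Failed to activate via raw"), then succeeds through LVM mode: ceph-bluestore-tool prime-osd-dir copies the BlueStore metadata out of the logical volume into the OSD directory, the block symlink is re-pointed at the LV, and ownership is handed to ceph:ceph. Condensed into a Python sketch, using exactly the commands the log shows ceph-volume running:

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    lv = "/dev/ceph_vg0/ceph_lv0"

    for cmd in (
        # materialize fsid, keyring, etc. from the BlueStore label into osd_dir
        ["ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", lv, "--path", osd_dir, "--no-mon-config"],
        ["ln", "-snf", lv, f"{osd_dir}/block"],  # block -> LV symlink
        ["chown", "-h", "ceph:ceph", f"{osd_dir}/block"],
        ["chown", "-R", "ceph:ceph", osd_dir],
    ):
        subprocess.run(cmd, check=True)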
Nov 23 15:40:32 np0005532761 systemd[1]: libpod-ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f.scope: Deactivated successfully.
Nov 23 15:40:32 np0005532761 podman[82831]: 2025-11-23 20:40:32.695379703 +0000 UTC m=+1.594510381 container died ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:32 np0005532761 systemd[1]: libpod-ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f.scope: Consumed 1.285s CPU time.
Nov 23 15:40:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-938c92809fb028be88baa3e14e56bbb8827c71980770c7d61376c802a94b47de-merged.mount: Deactivated successfully.
Nov 23 15:40:32 np0005532761 podman[82831]: 2025-11-23 20:40:32.74391257 +0000 UTC m=+1.643043238 container remove ca2412eba4dac73ccdcc7e31ad2487a70fa185203c107c5fd23ffaea19e2ea1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:32 np0005532761 podman[83095]: 2025-11-23 20:40:32.921422148 +0000 UTC m=+0.038838585 container create 92c03300501cf916eb9be5826c79dbecbb4f4c6eb3c80cc0facb741dfbb3f287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7b168b6d9372a2226f3a227048af941cb24ac72a9fb9d3f3cf2afa8efe9aff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7b168b6d9372a2226f3a227048af941cb24ac72a9fb9d3f3cf2afa8efe9aff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7b168b6d9372a2226f3a227048af941cb24ac72a9fb9d3f3cf2afa8efe9aff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7b168b6d9372a2226f3a227048af941cb24ac72a9fb9d3f3cf2afa8efe9aff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7b168b6d9372a2226f3a227048af941cb24ac72a9fb9d3f3cf2afa8efe9aff/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532761 podman[83095]: 2025-11-23 20:40:32.981209261 +0000 UTC m=+0.098625718 container init 92c03300501cf916eb9be5826c79dbecbb4f4c6eb3c80cc0facb741dfbb3f287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:32 np0005532761 podman[83095]: 2025-11-23 20:40:32.987526431 +0000 UTC m=+0.104942868 container start 92c03300501cf916eb9be5826c79dbecbb4f4c6eb3c80cc0facb741dfbb3f287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:40:32 np0005532761 bash[83095]: 92c03300501cf916eb9be5826c79dbecbb4f4c6eb3c80cc0facb741dfbb3f287
Nov 23 15:40:32 np0005532761 podman[83095]: 2025-11-23 20:40:32.903961433 +0000 UTC m=+0.021377900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:32 np0005532761 systemd[1]: Started Ceph osd.1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: pidfile_write: ignore empty --pid-file
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
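The open line is internally consistent: 0x4ffc00000 bytes equals 21470642176, which the message rounds to 20 GiB, and BlueStore keeps a 4 KiB block size even though the LV reports a 512-byte st_blksize (hence the "using bdev_block_size 4096 anyway" note). Checking the arithmetic:

    size = 0x4ffc00000
    assert size == 21470642176
    print(size / 2**30)  # 19.99609375, logged as "20 GiB"
    print(size % 4096)   # 0: the device is 4 KiB-aligned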
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
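These mon audit entries show the cephadm mgr module persisting per-host device inventory in the monitors' config-key store under mgr/cephadm/host.<hostname> keys. The stored value can be read back with the standard CLI; a sketch assuming admin credentials on this host (cephadm stores the inventory as JSON):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))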
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.568124138 +0000 UTC m=+0.087519837 container create cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.502555498 +0000 UTC m=+0.021951217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:33 np0005532761 systemd[1]: Started libpod-conmon-cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896.scope.
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
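_set_cache_sizes reports how BlueStore carves up its 1 GiB (1073741824-byte) cache: 45% for metadata, 45% for RocksDB key/value blocks, 4% for cached onodes and 6% for object data, with the four ratios summing to 1. Reproducing the split:

    cache = 1073741824  # bytes, from the log line
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    assert abs(sum(ratios.values()) - 1.0) < 1e-9
    for name, r in ratios.items():
        print(name, int(cache * r), "bytes")  # meta -> 483183820, etc.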
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.654053839 +0000 UTC m=+0.173449558 container init cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4fc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.661516411 +0000 UTC m=+0.180912110 container start cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.665311205 +0000 UTC m=+0.184706904 container attach cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:33 np0005532761 relaxed_nash[83243]: 167 167
Nov 23 15:40:33 np0005532761 systemd[1]: libpod-cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896.scope: Deactivated successfully.
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.667176185 +0000 UTC m=+0.186571914 container died cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 15:40:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-839eceaee000d0c40917c78f0bc2d086ec08b7abe1206f414c82f1982b34b3a5-merged.mount: Deactivated successfully.
Nov 23 15:40:33 np0005532761 podman[83227]: 2025-11-23 20:40:33.709214176 +0000 UTC m=+0.228609875 container remove cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 15:40:33 np0005532761 systemd[1]: libpod-conmon-cebb9ae76aa0e1ae150575f3d3491cbc970924b3d91551ffe647c2c335485896.scope: Deactivated successfully.
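The relaxed_nash one-shot container printing "167 167" looks like cephadm's uid/gid probe: it runs the Ceph image briefly to learn which numeric uid:gid owns the ceph files, and 167:167 matches the "set uid:gid to 167:167" line ceph-osd logs at startup. A sketch of such a probe, assuming it simply stats /var/lib/ceph inside the image (the exact command cephadm ran is not visible in this log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = map(int, out.split())
    print(uid, gid)  # expected: 167 167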
Nov 23 15:40:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:33 np0005532761 podman[83270]: 2025-11-23 20:40:33.840364005 +0000 UTC m=+0.033276364 container create 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:33 np0005532761 systemd[1]: Started libpod-conmon-394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec.scope.
Nov 23 15:40:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fa6333275386a7924acc7e0280849eea36aa3258f9919616a5a3ffa2812821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fa6333275386a7924acc7e0280849eea36aa3258f9919616a5a3ffa2812821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fa6333275386a7924acc7e0280849eea36aa3258f9919616a5a3ffa2812821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fa6333275386a7924acc7e0280849eea36aa3258f9919616a5a3ffa2812821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532761 podman[83270]: 2025-11-23 20:40:33.82618035 +0000 UTC m=+0.019092739 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:33 np0005532761 podman[83270]: 2025-11-23 20:40:33.935182079 +0000 UTC m=+0.128094458 container init 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:33 np0005532761 podman[83270]: 2025-11-23 20:40:33.941183501 +0000 UTC m=+0.134095860 container start 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:33 np0005532761 ceph-osd[83114]: bdev(0x559a23d4f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:33 np0005532761 podman[83270]: 2025-11-23 20:40:33.94774546 +0000 UTC m=+0.140657819 container attach 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: load: jerasure load: lrc 
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:34 np0005532761 lvm[83369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:34 np0005532761 lvm[83369]: VG ceph_vg0 finished
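The lvm[...] pairs are LVM's event-driven autoactivation: udev announces the (here loop-backed) PV, pvscan observes that every PV of ceph_vg0 is now present ("VG ceph_vg0 is complete"), and the VG is brought online. The resulting layout can be confirmed with the standard reporting commands; a sketch:

    import subprocess

    # one row per LV: owning VG, LV name, backing device(s), size
    subprocess.run(["lvs", "-o", "vg_name,lv_name,devices,lv_size", "ceph_vg0"])
    subprocess.run(["pvs", "-o", "pv_name,vg_name", "/dev/loop3"])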
Nov 23 15:40:34 np0005532761 peaceful_williamson[83286]: {}
Nov 23 15:40:34 np0005532761 systemd[1]: libpod-394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec.scope: Deactivated successfully.
Nov 23 15:40:34 np0005532761 systemd[1]: libpod-394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec.scope: Consumed 1.037s CPU time.
Nov 23 15:40:34 np0005532761 podman[83270]: 2025-11-23 20:40:34.674929783 +0000 UTC m=+0.867842152 container died 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:40:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a3fa6333275386a7924acc7e0280849eea36aa3258f9919616a5a3ffa2812821-merged.mount: Deactivated successfully.
Nov 23 15:40:34 np0005532761 podman[83270]: 2025-11-23 20:40:34.722573926 +0000 UTC m=+0.915486285 container remove 394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:34 np0005532761 systemd[1]: libpod-conmon-394e1333db0b518e0c507f89eed09d36ab8d47b66c417bac57f1f1407dc451ec.scope: Deactivated successfully.
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
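The mClock line matches the scheduler's HDD defaults: 157286400 bytes/second is exactly 150 MiB/s (the default osd_mclock_max_sequential_bandwidth_hdd), and dividing it by the logged 499321.90 bytes/io cost gives roughly 315 IOPS, the default osd_mclock_max_capacity_iops_hdd. Checking:

    bw = 157286400.0         # bytes/second, from the log
    cost_per_io = 499321.90  # bytes/io, from the log
    print(bw / 2**20)               # 150.0 MiB/s
    print(round(bw / cost_per_io))  # 315 IOPS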
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
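At boot each OSD registers its device class with the monitors, which is what osd.0 (on the other host) is dispatching here; osd.1 on this host will do the same once it finishes mounting. The same operation is available from the admin CLI and succeeds as a no-op when the class is already set to the same value:

    import subprocess

    # record osd.1 as an HDD in the CRUSH map
    subprocess.run(["ceph", "osd", "crush", "set-device-class", "hdd", "1"],
                   check=True)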
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount shared_bdev_used = 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
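With no separate DB device, BlueFS carves RocksDB's space out of the single main device, and _prepare_db_environment sizes both db and db.slow at 20397110067 bytes, which is 95% of the 21470642176-byte device opened above. The arithmetic, read off these two log lines rather than from any documented formula:

    block = 21470642176     # main device size, from the bdev open line
    db_paths = 20397110067  # from _prepare_db_environment
    assert int(block * 0.95) == db_paths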
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Git sha 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DB SUMMARY
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DB Session ID:  LJWL2I9BWZATMASEW5KZ
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                     Options.env: 0x559a24bbbdc0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                Options.info_log: 0x559a24bbf7a0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.write_buffer_manager: 0x559a24cb6a00
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.row_cache: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.wal_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.wal_compression: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_background_jobs: 4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Compression algorithms supported:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kZSTD supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kXpressCompression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kBZip2Compression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kLZ4Compression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kZlibCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kLZ4HCCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         kSnappyCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DMutex implementation: pthread_mutex_t
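
The capability list shows this librocksdb build was compiled without ZSTD, BZip2, or Xpress support, which is why the column-family dumps below settle on LZ4 (Options.compression: LZ4) with bottommost compression disabled. A hedged sketch of selecting a codec the linked library actually supports, via the public GetSupportedCompressions() helper:

    #include <algorithm>
    #include <vector>
    #include "rocksdb/convenience.h"  // GetSupportedCompressions()
    #include "rocksdb/options.h"

    rocksdb::ColumnFamilyOptions make_cf_options() {
        rocksdb::ColumnFamilyOptions cf;
        const auto supported = rocksdb::GetSupportedCompressions();
        const bool has_lz4 = std::find(supported.begin(), supported.end(),
                                       rocksdb::kLZ4Compression) != supported.end();
        cf.compression = has_lz4 ? rocksdb::kLZ4Compression
                                 : rocksdb::kNoCompression;
        // "Options.bottommost_compression: Disabled" in the dump below:
        cf.bottommost_compression = rocksdb::kDisableCompressionOption;
        return cf;
    }
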
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
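
The two lines above show a read-only open early in the OSD's startup, recovering state from MANIFEST-000032, the same manifest listed in the DB SUMMARY. A DB whose manifest names extra column families must be opened with all of them supplied; a minimal sketch against a hypothetical path, with the family names taken from the dumps that follow:

    #include <vector>
    #include "rocksdb/db.h"

    int main() {
        rocksdb::DBOptions db_opts;
        // Names could also be discovered with rocksdb::DB::ListColumnFamilies().
        std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
            {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
            {"m-0", rocksdb::ColumnFamilyOptions()},
            {"m-1", rocksdb::ColumnFamilyOptions()},
            {"m-2", rocksdb::ColumnFamilyOptions()},
            {"p-0", rocksdb::ColumnFamilyOptions()},
        };
        std::vector<rocksdb::ColumnFamilyHandle*> handles;
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::OpenForReadOnly(
            db_opts, "/tmp/example-db", cfs, &handles, &db);
        if (!s.ok()) return 1;
        for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
        delete db;
        return 0;
    }
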
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
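
The block above is repeated almost verbatim for every shard that follows (m-0, m-1, m-2, p-0, ...); only the merge_operator differs, which the default family sets for Ceph's int64_array/bitwise_xor counters. The recurring memtable and compaction tuning, expressed through the public API with values copied from the dump (worst case 64 x 16 MiB of memtables per family, though min_write_buffer_number_to_merge 6 and max_total_wal_size 1 GiB normally force flushes far earlier):

    #include "rocksdb/options.h"

    rocksdb::ColumnFamilyOptions tuned_cf() {
        rocksdb::ColumnFamilyOptions cf;
        cf.write_buffer_size = 16 * 1024 * 1024;      // 16777216
        cf.max_write_buffer_number = 64;
        cf.min_write_buffer_number_to_merge = 6;      // merge ~6 memtables per flush
        cf.level0_file_num_compaction_trigger = 8;
        cf.level0_slowdown_writes_trigger = 20;
        cf.level0_stop_writes_trigger = 36;
        cf.target_file_size_base = 64 * 1024 * 1024;  // 67108864
        cf.max_bytes_for_level_base = 1ULL << 30;     // 1073741824
        cf.max_bytes_for_level_multiplier = 8.0;
        cf.num_levels = 7;
        cf.ttl = 2592000;                             // 30 days
        return cf;
    }
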
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
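
Every column family so far reports the identical table_factory block: one shared BinnedLRUCache at 0x559a23de5350 whose 483183820-byte capacity is exactly the meta 0.45 share of the 1 GiB cache_size from the _set_cache_sizes line at the top of this startup. BinnedLRUCache is Ceph's own cache implementation, so this sketch substitutes RocksDB's stock sharded LRUCache; the bloom bits-per-key is an assumption (the dump only says "bloomfilter"):

    #include <memory>
    #include "rocksdb/cache.h"
    #include "rocksdb/filter_policy.h"
    #include "rocksdb/options.h"
    #include "rocksdb/table.h"

    void attach_table_factory(rocksdb::ColumnFamilyOptions& cf,
                              std::shared_ptr<rocksdb::Cache> shared_cache) {
        rocksdb::BlockBasedTableOptions t;
        t.block_cache = shared_cache;              // one cache for all families
        t.block_size = 4096;
        t.cache_index_and_filter_blocks = true;
        t.pin_top_level_index_and_filter = true;
        t.format_version = 5;
        t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
        cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }

    // e.g. auto cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
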
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de49b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de49b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de49b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 83d8442e-ec3f-432e-af49-110516de13bb
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435390423, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435390643, "job": 1, "event": "recovery_finished"}
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: freelist init
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: freelist _read_cfg
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs umount
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bdev(0x559a24beb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluefs mount shared_bdev_used = 4718592
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Git sha 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DB SUMMARY
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DB Session ID:  LJWL2I9BWZATMASEW5KY
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                     Options.env: 0x559a24d5a2a0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                Options.info_log: 0x559a24bbf920
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.write_buffer_manager: 0x559a24cb6a00
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.row_cache: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                              Options.wal_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.wal_compression: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_background_jobs: 4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Compression algorithms supported:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kZSTD supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kXpressCompression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kZlibCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559a23de5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
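[editor's note] With level_compaction_dynamic_level_bytes at 0, these families use static level sizing: L1 targets max_bytes_for_level_base (1 GiB) and each deeper level is max_bytes_for_level_multiplier (8) times larger, since every addtl multiplier is 1. A quick worked computation from the logged values:

    base = 1073741824          # max_bytes_for_level_base, 1 GiB
    mult = 8.0                 # max_bytes_for_level_multiplier
    num_levels = 7             # Options.num_levels

    # Static sizing: target(L1) = base, target(Ln) = target(Ln-1) * mult.
    for level in range(1, num_levels):
        target = base * mult ** (level - 1)
        print(f"L{level}: {target / 2**30:.0f} GiB")
    # -> L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, ...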
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
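[editor's note] The memtable settings repeated for each family imply concrete flush behaviour: a 16 MiB write_buffer_size with min_write_buffer_number_to_merge of 6 means roughly 96 MiB of memtable data accumulates before a flush writes an SST, while max_write_buffer_number of 64 caps a single column family at about 1 GiB of memtables. The arithmetic, straight from the logged values:

    MiB = 1024 * 1024
    write_buffer_size = 16 * MiB   # Options.write_buffer_size
    min_merge = 6                  # min_write_buffer_number_to_merge
    max_buffers = 64               # max_write_buffer_number

    flush_batch = write_buffer_size * min_merge
    ceiling = write_buffer_size * max_buffers
    print(flush_batch // MiB, "MiB merged per flush")   # 96 MiB
    print(ceiling // MiB, "MiB memtable ceiling")       # 1024 MiB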
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
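[editor's note] The three level-0 triggers form a back-pressure ladder: compaction is scheduled at 8 L0 files, writes are slowed at 20, and stalled outright at 36. A tiny classifier over the logged thresholds, handy when eyeballing L0 file counts in later compaction-stats lines:

    def l0_state(l0_files: int,
                 trigger: int = 8,     # level0_file_num_compaction_trigger
                 slowdown: int = 20,   # level0_slowdown_writes_trigger
                 stop: int = 36) -> str:
        # Mirrors RocksDB's escalation order: compact, then slow, then stop.
        if l0_files >= stop:
            return "writes stopped"
        if l0_files >= slowdown:
            return "writes slowed"
        if l0_files >= trigger:
            return "compaction pending"
        return "healthy"

    for n in (4, 10, 25, 40):
        print(n, "->", l0_state(n))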
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
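[editor's note] Every m-* and p-* family above reports the same block_cache pointer (0x559a23de5350), so they share one BinnedLRUCache of 483183820 bytes, and num_shard_bits of 4 splits it into 2^4 = 16 shards. The per-shard arithmetic, assuming the logged figures:

    capacity = 483183820        # block_cache capacity from the dump
    num_shard_bits = 4
    shards = 2 ** num_shard_bits

    print(f"total  : {capacity / 2**20:.1f} MiB")            # ~460.8 MiB
    print(f"shards : {shards}")
    print(f"each   : {capacity / shards / 2**20:.1f} MiB")   # ~28.8 MiB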
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbf680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
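[editor's note] The [O-0] family that follows breaks the pattern: its table_factory record carries a different block_cache pointer (0x559a23de49b0) with a 536870912-byte (512 MiB) capacity, so the object families get a cache separate from the metadata ones. A hedged log-analysis sketch (the input file name is hypothetical) that groups column families by the cache they share:

    import re
    from collections import defaultdict

    cf_re = re.compile(r"Options for column family \[([^\]]+)\]")
    cache_re = re.compile(r"block_cache: (0x[0-9a-f]+)")

    caches = defaultdict(list)
    current_cf = None
    with open("ceph-osd.log") as fh:    # hypothetical capture of this log
        for line in fh:
            m = cf_re.search(line)
            if m:
                current_cf = m.group(1)
            m = cache_re.search(line)
            if m and current_cf:
                caches[m.group(1)].append(current_cf)

    # One line per distinct cache, listing the families that share it.
    for ptr, cfs in caches.items():
        print(ptr, "->", ", ".join(cfs))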
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de49b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
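The single-line table_factory dump in the block above packs a whole multi-line report into one record: rsyslog escapes embedded control characters as "#" plus three octal digits, so "#012" is a newline and "#011" (seen further below) a tab. A small sketch to restore the original layout:

import re

# Undo rsyslog-style "#NNN" octal escapes, e.g. #012 -> '\n', #011 -> '\t'.
def unescape_octal(msg: str) -> str:
    return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), msg)

print(unescape_octal('block_cache_options:#012    capacity : 536870912'))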
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de49b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
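The write-buffer settings repeated in each of these dumps imply concrete memory numbers: a 16 MiB memtable, flushes deferred until 6 immutable memtables have accumulated, and up to 64 memtables per column family before writes stall. Rough arithmetic (a sketch that ignores arena overhead):

write_buffer_size = 16 * 2**20        # Options.write_buffer_size: 16777216
min_to_merge = 6                      # Options.min_write_buffer_number_to_merge
max_buffers = 64                      # Options.max_write_buffer_number
print(write_buffer_size * min_to_merge // 2**20, "MiB merged per flush")      # 96 MiB
print(write_buffer_size * max_buffers // 2**30, "GiB worst-case memtables")   # 1 GiB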
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a24bbfac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a23de49b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
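With level_compaction_dynamic_level_bytes at 0, RocksDB sizes levels from the top down: L1 gets max_bytes_for_level_base and each deeper level is multiplied by max_bytes_for_level_multiplier (all the addtl factors here are 1). A worked sketch of the resulting targets for the values above:

base = 1073741824          # max_bytes_for_level_base (1 GiB)
multiplier = 8.0           # max_bytes_for_level_multiplier
target = base
for level in range(1, 7):  # num_levels: 7 -> L1..L6
    print(f"L{level}: {target / 2**30:.0f} GiB")   # 1, 8, 64, 512, 4096, 32768
    target *= multiplier   # addtl[level] is 1 for every level here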
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
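rocksdb prints "(skipping printing options)" for the last two column families, presumably because their options duplicate ones already dumped; the default, m-*, p-*, and O-* shards above all show one shared tuning profile. That is easy to verify mechanically (o0_block and o1_block are assumed variables holding the raw text of two dumps):

def options_only(block: str):
    # Keep just the "Options...." payloads; timestamps/PIDs differ per line.
    # Assumes every matching line contains a 'rocksdb:' marker, as in this log.
    return sorted(line.split('rocksdb:', 1)[1].strip()
                  for line in block.splitlines() if 'Options.' in line)

# options_only(o0_block) == options_only(o1_block)  ->  True for [O-0] vs [O-1]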
Nov 23 15:40:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
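The manifest recovery enumerates every column family in this BlueStore database: default, three m-* shards, three p-* shards, three O-* shards, plus L and P, which matches BlueStore's sharded column-family layout. A sketch to pull the ID/name pairs out of these lines (column_families is a hypothetical helper):

import re

CF_RE = re.compile(r'Column family \[([^\]]+)\] \(ID (\d+)\)')

def column_families(lines):
    # -> {0: 'default', 1: 'm-0', ..., 10: 'L', 11: 'P'}
    return {int(m.group(2)): m.group(1)
            for line in lines if (m := CF_RE.search(line))}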
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 83d8442e-ec3f-432e-af49-110516de13bb
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435675305, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435754431, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83d8442e-ec3f-432e-af49-110516de13bb", "db_session_id": "LJWL2I9BWZATMASEW5KY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435793709, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83d8442e-ec3f-432e-af49-110516de13bb", "db_session_id": "LJWL2I9BWZATMASEW5KY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435827113, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83d8442e-ec3f-432e-af49-110516de13bb", "db_session_id": "LJWL2I9BWZATMASEW5KY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930435839640, "job": 1, "event": "recovery_finished"}
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 23 15:40:35 np0005532761 podman[83913]: 2025-11-23 20:40:35.909608429 +0000 UTC m=+0.103421278 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559a24d86000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: DB pointer 0x559a24d66000
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
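The _open_db line echoes the option string BlueStore hands to RocksDB (presumably the bluestore_rocksdb_options value); it is a flat comma-separated key=value list, so splitting it is trivial:

opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
parsed = dict(kv.split('=', 1) for kv in opts.split(','))
print(parsed['compaction_readahead_size'])   # 2MB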
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.4 total, 0.4 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.4 total, 0.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 460.80 MB usag
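One oddity in the stats dump above: "occupancy: 18446744073709551615" is 2**64 - 1, the all-ones unsigned 64-bit value, which reads as an unset counter for this BinnedLRUCache rather than a real occupancy figure. A one-line check:

assert 18446744073709551615 == 2**64 - 1   # UINT64_MAX, i.e. unsigned -1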
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 23 15:40:35 np0005532761 ceph-osd[83114]: _get_class not permitted to load lua
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: _get_class not permitted to load sdk
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 load_pgs
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 load_pgs opened 0 pgs
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: osd.1 0 log_to_monitors true
Nov 23 15:40:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1[83110]: 2025-11-23T20:40:36.008+0000 7f8389388740 -1 osd.1 0 log_to_monitors true
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 23 15:40:36 np0005532761 podman[83913]: 2025-11-23 20:40:36.03125497 +0000 UTC m=+0.225067809 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
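CRUSH weights are conventionally the device capacity in TiB, so the initial_weight 0.0195 above corresponds to roughly a 20 GiB OSD device. Quick arithmetic:

weight_tib = 0.0195
print(weight_tib * 2**40 / 2**30)   # ~19.97 GiB
print(weight_tib * 2**40 / 1e9)     # ~21.4 GB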
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
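[annotation] The repeated "failed to return metadata for osd.N: (2) No such file or directory" entries appear to be benign here: the mgr polls "osd metadata" before either OSD has finished booting, so the monitor has nothing to return and answers ENOENT. The same query succeeds once the "boot" messages appear further down. A sketch that polls until metadata is available, reusing the connected cluster handle from the sketch above (illustrative only):

    import json
    import time

    def wait_for_osd_metadata(cluster, osd_id, timeout_s=60):
        """Poll 'osd metadata' until the mon stops returning ENOENT."""
        cmd = json.dumps({"prefix": "osd metadata", "id": osd_id,
                          "format": "json"})
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            ret, out, errs = cluster.mon_command(cmd, b'')
            if ret == 0:
                return json.loads(out)
            time.sleep(2)  # mon keeps answering -2 until the OSD has booted
        raise TimeoutError(f"osd.{osd_id} metadata not available")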
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 23 15:40:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 done with init, starting boot process
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 start_boot
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 23 15:40:37 np0005532761 ceph-osd[83114]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.476248094 +0000 UTC m=+0.020143827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.645101737 +0000 UTC m=+0.188997460 container create 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:37 np0005532761 systemd[1]: Started libpod-conmon-9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942.scope.
Nov 23 15:40:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.829974944 +0000 UTC m=+0.373870657 container init 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.839907434 +0000 UTC m=+0.383803167 container start 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:37 np0005532761 gallant_cohen[84221]: 167 167
Nov 23 15:40:37 np0005532761 systemd[1]: libpod-9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942.scope: Deactivated successfully.
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.871444219 +0000 UTC m=+0.415339932 container attach 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:37 np0005532761 podman[84205]: 2025-11-23 20:40:37.871949693 +0000 UTC m=+0.415845436 container died 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-72f371c5244be0f8a7d3d4ef9e58fd2723db5e568169d2c89e8ecaac4752ba86-merged.mount: Deactivated successfully.
Nov 23 15:40:38 np0005532761 podman[84205]: 2025-11-23 20:40:38.063483611 +0000 UTC m=+0.607379344 container remove 9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:40:38 np0005532761 systemd[1]: libpod-conmon-9fbb684077675db5581ad24135ac9ff86d4071f784fbb3e4a33cda5bf0063942.scope: Deactivated successfully.
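[annotation] The auto-named containers in this stretch (gallant_cohen here, unruffled_meitner just below) look like cephadm's short-lived helper containers: each is created, started, prints its output ("167 167" is consistent with a ceph uid/gid probe), dies, and is removed within about a second. A hedged sketch for watching this lifecycle from Python by streaming podman's event feed (podman on PATH and the JSON event field names are assumptions):

    import json
    import subprocess

    # Stream lifecycle events mirroring the podman[...] journal lines:
    # create, init, start, attach, died, remove.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Type"), ev.get("Status"), ev.get("Name"))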
Nov 23 15:40:38 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:38 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:38 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:38 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:38 np0005532761 podman[84245]: 2025-11-23 20:40:38.240050452 +0000 UTC m=+0.065821057 container create 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 15:40:38 np0005532761 podman[84245]: 2025-11-23 20:40:38.194836916 +0000 UTC m=+0.020607541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:38 np0005532761 systemd[1]: Started libpod-conmon-00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86.scope.
Nov 23 15:40:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd63f57b7c5c69243003be00d2ef1ddf1110c5a1cdaf009e416030139266b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd63f57b7c5c69243003be00d2ef1ddf1110c5a1cdaf009e416030139266b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd63f57b7c5c69243003be00d2ef1ddf1110c5a1cdaf009e416030139266b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd63f57b7c5c69243003be00d2ef1ddf1110c5a1cdaf009e416030139266b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:38 np0005532761 podman[84245]: 2025-11-23 20:40:38.397701181 +0000 UTC m=+0.223471836 container init 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:38 np0005532761 podman[84245]: 2025-11-23 20:40:38.406386336 +0000 UTC m=+0.232156951 container start 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:38 np0005532761 podman[84245]: 2025-11-23 20:40:38.420762637 +0000 UTC m=+0.246533262 container attach 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]: [
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:    {
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "available": false,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "being_replaced": false,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "ceph_device_lvm": false,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "lsm_data": {},
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "lvs": [],
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "path": "/dev/sr0",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "rejected_reasons": [
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "Has a FileSystem",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "Insufficient space (<5GB)"
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        ],
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        "sys_api": {
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "actuators": null,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "device_nodes": [
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:                "sr0"
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            ],
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "devname": "sr0",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "human_readable_size": "482.00 KB",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "id_bus": "ata",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "model": "QEMU DVD-ROM",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "nr_requests": "2",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "parent": "/dev/sr0",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "partitions": {},
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "path": "/dev/sr0",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "removable": "1",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "rev": "2.5+",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "ro": "0",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "rotational": "1",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "sas_address": "",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "sas_device_handle": "",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "scheduler_mode": "mq-deadline",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "sectors": 0,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "sectorsize": "2048",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "size": 493568.0,
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "support_discard": "2048",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "type": "disk",
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:            "vendor": "QEMU"
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:        }
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]:    }
Nov 23 15:40:39 np0005532761 unruffled_meitner[84261]: ]
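[annotation] The JSON block emitted by unruffled_meitner is ceph-volume inventory-style output: the only device this helper sees is /dev/sr0, rejected both for carrying a filesystem and for being under 5 GB (482 KB), so cephadm records no usable devices from this scan. A small sketch that filters such an inventory down to usable candidates (the input filename is an assumption):

    import json

    with open("inventory.json") as f:   # saved output of the helper above
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "->",
                  ", ".join(dev["rejected_reasons"]))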
Nov 23 15:40:39 np0005532761 systemd[1]: libpod-00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86.scope: Deactivated successfully.
Nov 23 15:40:39 np0005532761 podman[84245]: 2025-11-23 20:40:39.266868808 +0000 UTC m=+1.092639413 container died 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ebd63f57b7c5c69243003be00d2ef1ddf1110c5a1cdaf009e416030139266b47-merged.mount: Deactivated successfully.
Nov 23 15:40:39 np0005532761 podman[84245]: 2025-11-23 20:40:39.571417302 +0000 UTC m=+1.397187907 container remove 00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:39 np0005532761 systemd[1]: libpod-conmon-00d19931294b59bb6003d6d15c996d8a3349c722150a66c83df1a6f96cd05a86.scope: Deactivated successfully.
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:40:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:40:39 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
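[annotation] The autotuner computes 134211993 bytes (about 128 MiB, logged truncated as "127.9M") for the single OSD on compute-0, but osd_memory_target enforces a floor of 939524096 bytes (896 MiB), so the set is rejected and the previous value stays in effect. A quick check of the numbers from the two lines above:

    proposed = 134211993           # bytes cephadm computed for compute-0
    minimum  = 939524096           # osd_memory_target lower bound

    MiB = 1024 * 1024
    print(round(proposed / MiB, 2))   # -> 127.99 (logged as 127.9M)
    print(minimum // MiB)             # -> 896
    print(proposed < minimum)         # -> True, hence the WRN above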
Nov 23 15:40:40 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:40 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:40 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:40 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:40:40 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:40:41 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:41 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:41 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:41 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Nov 23 15:40:42 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 23 15:40:42 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:43 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:43 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:43 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:43 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
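[annotation] The recurring _set_new_cache_sizes line shows the mon splitting its roughly 0.95 GiB cache budget between incremental osdmaps, full osdmaps, and the KV cache; the three allocations sum back to the advertised cache_size within rounding. A quick check of the figures above:

    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)   # -> 1019215872, 838859 unassigned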
Nov 23 15:40:44 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:44 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:44 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:44 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:45 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:45 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:45 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:45 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:46 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:46 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:46 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:46 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.934 iops: 2031.119 elapsed_sec: 1.477
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: log_channel(cluster) log [WRN] : OSD bench result of 2031.118864 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
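[annotation] This warning ties back to the "bench count 12288000 bsize 4 KiB" line earlier: 12288000 bytes at 4 KiB per IO is 3000 writes, and 3000 IOs in the 1.477 s elapsed is the reported 2031 IOPS (2031 x 4 KiB is about 7.9 MiB/s). Because the result falls outside mclock's accepted 50-500 IOPS window, osd.1 keeps the default 315 IOPS capacity, and the log recommends measuring with an external tool and overriding the option. A sketch of the arithmetic plus the suggested override (the 250 IOPS value is a placeholder, not a measurement; reuses the connected cluster handle from the first sketch):

    import json

    # Verify the bench numbers from the log lines above.
    count, bsize = 12288000, 4096
    ios = count // bsize                   # 3000 IOs
    elapsed = 1.477
    print(ios / elapsed)                   # ~2031 IOPS
    print(ios / elapsed * bsize / 2**20)   # ~7.93 MiB/s

    # Apply a Fio-measured capacity once one exists (placeholder value).
    cmd = {"prefix": "config set", "who": "osd.1",
           "name": "osd_mclock_max_capacity_iops_hdd", "value": "250"}
    # ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')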
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 0 waiting for initial osdmap
Nov 23 15:40:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1[83110]: 2025-11-23T20:40:46.877+0000 7f8385b1e640 -1 osd.1 0 waiting for initial osdmap
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 set_numa_affinity not setting numa affinity
Nov 23 15:40:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-1[83110]: 2025-11-23T20:40:46.966+0000 7f8380933640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 15:40:46 np0005532761 ceph-osd[83114]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2449545263; not ready for session (expect reconnect)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: OSD bench result of 2031.118864 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263] boot
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:47 np0005532761 ceph-osd[83114]: osd.1 8 state: booting -> active
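[annotation] The osdmap epochs in this stretch walk from "2 total, 0 up, 2 in" (e6/e7) through "1 up" at e8 when osd.1 boots, after which osd.1 flips booting -> active; osd.0 follows at e9 below. A small sketch for spotting the all-up point from these journal summaries (the regex mirrors the "osdmap eN: X total, Y up, Z in" format):

    import re

    line = "osdmap e9: 2 total, 2 up, 2 in"
    m = re.search(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in", line)
    epoch, total, up, inn = map(int, m.groups())
    if up == total == inn:
        print(f"all {total} OSDs up and in at epoch {epoch}")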
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:40:47 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] creating mgr pool
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Nov 23 15:40:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:40:48 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/220289678; not ready for session (expect reconnect)
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:48 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678] boot
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263] boot
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:40:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Nov 23 15:40:49 np0005532761 ceph-osd[83114]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 23 15:40:49 np0005532761 ceph-osd[83114]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 23 15:40:49 np0005532761 ceph-osd[83114]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: OSD bench result of 6269.861471 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
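[editor's note] The warning above means the startup bench measured 6269.86 IOPS on osd.0, which falls outside the 50-500 IOPS sanity window, so the mclock scheduler keeps the default capacity of 315 IOPS. The message itself recommends benchmarking externally (e.g. with fio) and overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A sketch of that override via a "config set" mon command; the value 6000 is a placeholder for whatever your fio run produces, and the _hdd suffix assumes a spinning device class.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({
        "prefix": "config set", "who": "osd.0",
        "name": "osd_mclock_max_capacity_iops_hdd",  # or _ssd, per device class
        "value": "6000",  # placeholder: substitute the fio-measured IOPS
    })
    ret, out, err = cluster.mon_command(cmd, b'')
    cluster.shutdown()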
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678] boot
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 23 15:40:49 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] creating main.db for devicehealth
Nov 23 15:40:49 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:40:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:40:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 23 15:40:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.oyehye(active, since 92s)
Nov 23 15:40:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:56 np0005532761 python3[85357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.305945223 +0000 UTC m=+0.056657838 container create 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:56 np0005532761 systemd[1]: Started libpod-conmon-800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454.scope.
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.271829527 +0000 UTC m=+0.022542172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:40:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da451c8c05b9d61d0e656c04d0e779aed21224185c98fa3fbde0c91875506e31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da451c8c05b9d61d0e656c04d0e779aed21224185c98fa3fbde0c91875506e31/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da451c8c05b9d61d0e656c04d0e779aed21224185c98fa3fbde0c91875506e31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.413932453 +0000 UTC m=+0.164645068 container init 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.421557811 +0000 UTC m=+0.172270416 container start 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.425482487 +0000 UTC m=+0.176195322 container attach 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Nov 23 15:40:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 15:40:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340333746' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 15:40:56 np0005532761 nostalgic_brattain[85375]: 
Nov 23 15:40:56 np0005532761 nostalgic_brattain[85375]: {"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":118,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1763930448,"num_in_osds":2,"osd_in_since":1763930422,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894627840,"bytes_avail":42046656512,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-11-23T20:38:56:367641+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-23T20:40:19.668770+0000","services":{}},"progress_events":{}}
Nov 23 15:40:56 np0005532761 systemd[1]: libpod-800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454.scope: Deactivated successfully.
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.905548325 +0000 UTC m=+0.656261000 container died 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:40:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay-da451c8c05b9d61d0e656c04d0e779aed21224185c98fa3fbde0c91875506e31-merged.mount: Deactivated successfully.
Nov 23 15:40:56 np0005532761 podman[85359]: 2025-11-23 20:40:56.951321177 +0000 UTC m=+0.702033822 container remove 800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454 (image=quay.io/ceph/ceph:v19, name=nostalgic_brattain, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:40:56 np0005532761 systemd[1]: libpod-conmon-800f08cabd5f88e4d2edf6269e9f792f8633b34bd0e42acc46695583d2481454.scope: Deactivated successfully.
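[editor's note] The Ansible task in this container run shells out to podman to execute `ceph status --format json` and pipes the result through `jq .osdmap.num_up_osds`; the JSON the container printed is the nostalgic_brattain line above. The same extraction in Python, assuming the blob has been captured to a file (status.json is a hypothetical path; in the playbook the output goes straight to jq):

    import json

    # status.json: the JSON blob printed by the podman/ceph run above.
    with open('status.json') as f:
        status = json.load(f)
    # Equivalent of `jq .osdmap.num_up_osds`; the log shows 2 up of 2 total.
    print(status["osdmap"]["num_up_osds"])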
Nov 23 15:40:57 np0005532761 python3[85437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:40:57 np0005532761 podman[85438]: 2025-11-23 20:40:57.472513832 +0000 UTC m=+0.046137054 container create ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:40:57 np0005532761 systemd[1]: Started libpod-conmon-ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4.scope.
Nov 23 15:40:57 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a2ab2658c13248145c42803471ce2c780099da0bf8670c5fde086b3e1e73b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a2ab2658c13248145c42803471ce2c780099da0bf8670c5fde086b3e1e73b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:57 np0005532761 podman[85438]: 2025-11-23 20:40:57.536517318 +0000 UTC m=+0.110140590 container init ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 15:40:57 np0005532761 podman[85438]: 2025-11-23 20:40:57.545887502 +0000 UTC m=+0.119510724 container start ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:57 np0005532761 podman[85438]: 2025-11-23 20:40:57.453451344 +0000 UTC m=+0.027074836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:40:57 np0005532761 podman[85438]: 2025-11-23 20:40:57.577625834 +0000 UTC m=+0.151249076 container attach ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:40:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:40:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:40:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:40:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:40:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 23 15:40:58 np0005532761 silly_leavitt[85453]: pool 'vms' created
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 23 15:40:58 np0005532761 systemd[1]: libpod-ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4.scope: Deactivated successfully.
Nov 23 15:40:58 np0005532761 conmon[85453]: conmon ca07a7e3fd9ee892f0ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4.scope/container/memory.events
Nov 23 15:40:58 np0005532761 podman[85438]: 2025-11-23 20:40:58.796788468 +0000 UTC m=+1.370411690 container died ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:40:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:40:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-65a2ab2658c13248145c42803471ce2c780099da0bf8670c5fde086b3e1e73b5-merged.mount: Deactivated successfully.
Nov 23 15:40:58 np0005532761 podman[85438]: 2025-11-23 20:40:58.836679871 +0000 UTC m=+1.410303103 container remove ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4 (image=quay.io/ceph/ceph:v19, name=silly_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:58 np0005532761 systemd[1]: libpod-conmon-ca07a7e3fd9ee892f0ee36daf5af1490dfad82af5680183ba6fbfdf9d9f048d4.scope: Deactivated successfully.
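[editor's note] The playbook now repeats the same podman invocation once per pool: vms (above), then volumes, backups, and images (below). Each run maps to an "osd pool create" mon command carrying erasure_code_profile "replicated_rule" and autoscale_mode "on", exactly as the audit lines record. A sketch of the same four creations as one rados loop, mirroring the dispatched payloads:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # The four pools this playbook creates, in the order they appear in the log.
    for pool in ("vms", "volumes", "backups", "images"):
        cmd = json.dumps({
            "prefix": "osd pool create", "pool": pool,
            "erasure_code_profile": "replicated_rule",
            "autoscale_mode": "on",
        })
        ret, out, err = cluster.mon_command(cmd, b'')
        print(pool, ret, err)
    cluster.shutdown()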
Nov 23 15:40:59 np0005532761 python3[85517]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.177173761 +0000 UTC m=+0.059169707 container create 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Nov 23 15:40:59 np0005532761 systemd[1]: Started libpod-conmon-1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19.scope.
Nov 23 15:40:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.139836987 +0000 UTC m=+0.021832953 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:40:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29de2e7bf3ad5d31d8ab7aa1e6f74d6096bedad68dea89c4d4e649e3a4607d25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29de2e7bf3ad5d31d8ab7aa1e6f74d6096bedad68dea89c4d4e649e3a4607d25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.246864822 +0000 UTC m=+0.128860788 container init 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.251824416 +0000 UTC m=+0.133820372 container start 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.255395834 +0000 UTC m=+0.137391800 container attach 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:40:59 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:40:59 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:40:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v56: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 23 15:40:59 np0005532761 angry_dijkstra[85533]: pool 'volumes' created
Nov 23 15:40:59 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 23 15:40:59 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 13 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:40:59 np0005532761 systemd[1]: libpod-1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19.scope: Deactivated successfully.
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.812632626 +0000 UTC m=+0.694628572 container died 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-29de2e7bf3ad5d31d8ab7aa1e6f74d6096bedad68dea89c4d4e649e3a4607d25-merged.mount: Deactivated successfully.
Nov 23 15:40:59 np0005532761 podman[85518]: 2025-11-23 20:40:59.850478923 +0000 UTC m=+0.732474909 container remove 1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19 (image=quay.io/ceph/ceph:v19, name=angry_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:40:59 np0005532761 systemd[1]: libpod-conmon-1175cdf177ec5026d30b659045554b63f599c6526899f9e8abc1bb40adc83e19.scope: Deactivated successfully.
Nov 23 15:41:00 np0005532761 python3[85597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.184891828 +0000 UTC m=+0.041135008 container create 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:00 np0005532761 systemd[1]: Started libpod-conmon-22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9.scope.
Nov 23 15:41:00 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2666f1dedb42c5024c93456aeed414af701dd994ec5ff05e09c3aa8f34e3b5e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2666f1dedb42c5024c93456aeed414af701dd994ec5ff05e09c3aa8f34e3b5e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.258362431 +0000 UTC m=+0.114605631 container init 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.166618722 +0000 UTC m=+0.022861932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.264397166 +0000 UTC m=+0.120640346 container start 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.269889104 +0000 UTC m=+0.126132304 container attach 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:00 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:00 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 23 15:41:00 np0005532761 hardcore_yonath[85613]: pool 'backups' created
Nov 23 15:41:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 23 15:41:00 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 14 pg[4.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:00 np0005532761 systemd[1]: libpod-22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9.scope: Deactivated successfully.
Nov 23 15:41:00 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:00 np0005532761 podman[85598]: 2025-11-23 20:41:00.823033316 +0000 UTC m=+0.679276496 container died 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2666f1dedb42c5024c93456aeed414af701dd994ec5ff05e09c3aa8f34e3b5e9-merged.mount: Deactivated successfully.
Nov 23 15:41:01 np0005532761 podman[85598]: 2025-11-23 20:41:01.020656449 +0000 UTC m=+0.876899629 container remove 22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9 (image=quay.io/ceph/ceph:v19, name=hardcore_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:01 np0005532761 systemd[1]: libpod-conmon-22c93012dc845734786b079b687bad3be10e24da36fea11fb42d9c1761782ef9.scope: Deactivated successfully.
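[editor's note] Each newly created pool trips the POOL_APP_NOT_ENABLED health warning seen above until an application tag is set, as the mgr did earlier for .mgr with app "mgr". For OpenStack block pools the conventional tag is "rbd", but that choice is an assumption here: this log shows only the warning, not the eventual enable. A sketch under that assumption:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    for pool in ("vms", "volumes", "backups", "images"):
        # "rbd" is assumed; pick the app that matches each pool's consumer.
        cmd = json.dumps({
            "prefix": "osd pool application enable",
            "pool": pool, "app": "rbd",
        })
        cluster.mon_command(cmd, b'')
    cluster.shutdown()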
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v59: 4 pgs: 1 active+clean, 3 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 15:41:01 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev a45bf329-7f45-49ff-9b56-13703646c4d8 (Updating mon deployment (+2 -> 3))
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:01 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 23 15:41:01 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
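[editor's note] Before deploying mon.compute-2, cephadm gathers what the new daemon needs: "auth get mon." for the mon keyring, "config get public_network", and "config generate-minimal-conf" for the bootstrap ceph.conf it writes under /var/lib/ceph/<fsid>/config/ (all dispatched in the lines above). A sketch of fetching that minimal conf the same way; the output buffer holds the generated ceph.conf text:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Same command the mgr dispatches above; out is the minimal ceph.conf.
    ret, out, err = cluster.mon_command(
        '{"prefix": "config generate-minimal-conf"}', b'')
    print(out.decode())
    cluster.shutdown()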
Nov 23 15:41:01 np0005532761 python3[85678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:01 np0005532761 podman[85679]: 2025-11-23 20:41:01.324951096 +0000 UTC m=+0.021343520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:01 np0005532761 podman[85679]: 2025-11-23 20:41:01.505681681 +0000 UTC m=+0.202074085 container create 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 23 15:41:01 np0005532761 systemd[1]: Started libpod-conmon-4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d.scope.
Nov 23 15:41:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b704aee62a7ba72dbc81a91850e505f1c7a70cbe83f61ba39add46bf43d27e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b704aee62a7ba72dbc81a91850e505f1c7a70cbe83f61ba39add46bf43d27e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 23 15:41:02 np0005532761 podman[85679]: 2025-11-23 20:41:02.012367901 +0000 UTC m=+0.708760325 container init 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:41:02 np0005532761 podman[85679]: 2025-11-23 20:41:02.019556146 +0000 UTC m=+0.715948560 container start 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 23 15:41:02 np0005532761 podman[85679]: 2025-11-23 20:41:02.140606312 +0000 UTC m=+0.836998736 container attach 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: Deploying daemon mon.compute-2 on compute-2
Nov 23 15:41:02 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:02 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event, 3 pgs not in active + clean state
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:41:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651014750' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v61: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1651014750' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651014750' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 23 15:41:03 np0005532761 pensive_gauss[85694]: pool 'images' created
Nov 23 15:41:03 np0005532761 systemd[1]: libpod-4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d.scope: Deactivated successfully.
Nov 23 15:41:03 np0005532761 podman[85679]: 2025-11-23 20:41:03.455456393 +0000 UTC m=+2.151848837 container died 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 23 15:41:03 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8b704aee62a7ba72dbc81a91850e505f1c7a70cbe83f61ba39add46bf43d27e4-merged.mount: Deactivated successfully.
Nov 23 15:41:04 np0005532761 podman[85679]: 2025-11-23 20:41:04.261079995 +0000 UTC m=+2.957472389 container remove 4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d (image=quay.io/ceph/ceph:v19, name=pensive_gauss, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
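[note: the two DBG audit entries above record cephadm fetching the mon public_network setting and a minimal client config before deploying mon.compute-1. The same command can be run by hand; the output below is an illustrative sketch assembled from the fsid and mon addresses visible elsewhere in this log, not captured output:

    $ ceph config generate-minimal-conf
    # minimal ceph.conf for 03808be8-ae4a-5548-82e6-4a294f1bc627
    [global]
            fsid = 03808be8-ae4a-5548-82e6-4a294f1bc627
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
]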
Nov 23 15:41:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 23 15:41:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 23 15:41:04 np0005532761 systemd[1]: libpod-conmon-4205cdc36bce67ad335a2a0638311a809a2ebf69a2aa379f3dbfe3acf9acc85d.scope: Deactivated successfully.
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 23 15:41:04 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:04 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 23 15:41:04 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:04 np0005532761 python3[85760]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
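[note: reflowed for readability, the one-shot container that ansible launched above is equivalent to the following; every flag, mount, and argument is taken verbatim from the _raw_params in the log line:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create cephfs.cephfs.meta replicated_rule --autoscale-mode on
]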
Nov 23 15:41:04 np0005532761 podman[85761]: 2025-11-23 20:41:04.60579024 +0000 UTC m=+0.048247720 container create 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:41:04 np0005532761 systemd[1]: Started libpod-conmon-6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2.scope.
Nov 23 15:41:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896cb7feec168b91edc262316470c220bbbb0c5af5ff758f442a6d5e6e62b6c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896cb7feec168b91edc262316470c220bbbb0c5af5ff758f442a6d5e6e62b6c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:04 np0005532761 podman[85761]: 2025-11-23 20:41:04.676472398 +0000 UTC m=+0.118929908 container init 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:41:04 np0005532761 podman[85761]: 2025-11-23 20:41:04.585119668 +0000 UTC m=+0.027577198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:04 np0005532761 podman[85761]: 2025-11-23 20:41:04.682720437 +0000 UTC m=+0.125177917 container start 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:41:04 np0005532761 podman[85761]: 2025-11-23 20:41:04.686645264 +0000 UTC m=+0.129102744 container attach 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v63: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:05 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:05 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 23 15:41:06 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:06 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 23 15:41:06 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:06 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v64: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:07 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:07 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 23 15:41:07 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:07 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:08 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:08 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:08 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:08 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v65: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2832495456; not ready for session (expect reconnect)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : monmap epoch 2
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T20:41:04.417115+0000
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : created 2025-11-23T20:38:54.371685+0000
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.oyehye(active, since 111s)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
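[note: the warning above already names the fix. Since vms, volumes, and backups serve Nova and Cinder in this deployment, the conventional application tag would be 'rbd' (an assumption about intent, not something the log states):

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd
]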
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev a45bf329-7f45-49ff-9b56-13703646c4d8 (Updating mon deployment (+2 -> 3))
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event a45bf329-7f45-49ff-9b56-13703646c4d8 (Updating mon deployment (+2 -> 3)) in 8 seconds
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev b73c8b62-47f3-4a5d-9bcb-fb6ddf1cda19 (Updating mgr deployment (+2 -> 3))
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
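[note: the mgr keyring minted above corresponds to this CLI form of the same mon_command, reconstructed from the JSON payload in the audit line:

    ceph auth get-or-create mgr.compute-2.jtkauz \
        mon 'profile mgr' osd 'allow *' mds 'allow *'
]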
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:41:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: Deploying daemon mon.compute-1 on compute-1
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0 calling monitor election
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-2 calling monitor election
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 23 15:41:09 np0005532761 ceph-mon[74569]:    application not enabled on pool 'vms'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]:    application not enabled on pool 'volumes'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]:    application not enabled on pool 'backups'
Nov 23 15:41:09 np0005532761 ceph-mon[74569]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 23 15:41:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:10 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:10 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 23 15:41:10 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:10 np0005532761 ceph-mgr[74869]: mgr.server handle_report got status from non-daemon mon.compute-2
Nov 23 15:41:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:10.420+0000 7fd144441640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:11 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:11 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:12 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:12 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:12 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 3 completed events
Nov 23 15:41:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:41:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v68: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:13 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:13 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:14 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:14 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:15 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:15 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T20:41:10.249176+0000
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : created 2025-11-23T20:38:54.371685+0000
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap 
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.oyehye(active, since 118s)
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 23 15:41:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 15:41:16 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:16 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 23 15:41:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v70: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: Deploying daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0 calling monitor election
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-2 calling monitor election
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 23 15:41:17 np0005532761 ceph-mon[74569]:    application not enabled on pool 'vms'
Nov 23 15:41:17 np0005532761 ceph-mon[74569]:    application not enabled on pool 'volumes'
Nov 23 15:41:17 np0005532761 ceph-mon[74569]:    application not enabled on pool 'backups'
Nov 23 15:41:17 np0005532761 ceph-mon[74569]:    application not enabled on pool 'images'
Nov 23 15:41:17 np0005532761 ceph-mon[74569]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 0a0d55ed-63c7-4406-ae7b-f6061665a44d (Global Recovery Event) in 14 seconds
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/28911399; not ready for session (expect reconnect)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:41:17
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'volumes', 'images']
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
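[note: to unpack the autoscaler arithmetic above for '.mgr': pg target = usage fraction x bias x (OSD count x target PGs per OSD), i.e. 1.0778624975581169e-05 x 1.0 x (2 x 100) = 0.0021557249951162337, which quantizes to 1 PG. The factor of 200 matches this 2-OSD cluster with the stock mon_target_pg_per_osd of 100; that is an inference from the numbers, not stated in the log. The four empty pools compute a raw target of 0.0 and are instead quantized up to 32, apparently the floor for a not-yet-populated pool here, hence the 'osd pool set ... pg_num 32' commands that follow.]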
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:41:17 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Nov 23 15:41:17 np0005532761 agitated_hypatia[85776]: pool 'cephfs.cephfs.meta' created
Nov 23 15:41:17 np0005532761 systemd[1]: libpod-6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2.scope: Deactivated successfully.
Nov 23 15:41:17 np0005532761 podman[85761]: 2025-11-23 20:41:17.871048285 +0000 UTC m=+13.313505765 container died 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Nov 23 15:41:18 np0005532761 ceph-mgr[74869]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 23 15:41:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:18.251+0000 7fd144441640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-1 calling monitor election
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-896cb7feec168b91edc262316470c220bbbb0c5af5ff758f442a6d5e6e62b6c3-merged.mount: Deactivated successfully.
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.kgyerp on compute-1
Nov 23 15:41:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.kgyerp on compute-1
Nov 23 15:41:18 np0005532761 podman[85761]: 2025-11-23 20:41:18.421996917 +0000 UTC m=+13.864454397 container remove 6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2 (image=quay.io/ceph/ceph:v19, name=agitated_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:18 np0005532761 systemd[1]: libpod-conmon-6ea6c9a29b740b8088e8d1b09f1fd26e9d66b3829f405f7aa3406c2ea435cdb2.scope: Deactivated successfully.
Nov 23 15:41:18 np0005532761 python3[85846]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:18 np0005532761 podman[85847]: 2025-11-23 20:41:18.823553294 +0000 UTC m=+0.056636638 container create 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:41:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Nov 23 15:41:18 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev dd96287e-d8f1-4437-ad50-87613fd4341e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:41:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 19 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:18 np0005532761 systemd[1]: Started libpod-conmon-6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5.scope.
Nov 23 15:41:18 np0005532761 podman[85847]: 2025-11-23 20:41:18.797277431 +0000 UTC m=+0.030360815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a52a02b38b1b029202e119853b61eb490dd414614e9ac5f7dd71a4664a65fe5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a52a02b38b1b029202e119853b61eb490dd414614e9ac5f7dd71a4664a65fe5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:18 np0005532761 podman[85847]: 2025-11-23 20:41:18.93101931 +0000 UTC m=+0.164102704 container init 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:41:18 np0005532761 podman[85847]: 2025-11-23 20:41:18.937431234 +0000 UTC m=+0.170514578 container start 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:41:18 np0005532761 podman[85847]: 2025-11-23 20:41:18.941089064 +0000 UTC m=+0.174172428 container attach 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v73: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: Deploying daemon mgr.compute-1.kgyerp on compute-1
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Nov 23 15:41:19 np0005532761 beautiful_einstein[85863]: pool 'cephfs.cephfs.data' created
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 30bbcca5-465f-4969-a647-d60e186a8fb3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:19 np0005532761 systemd[1]: libpod-6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5.scope: Deactivated successfully.
Nov 23 15:41:19 np0005532761 conmon[85863]: conmon 6908c33cf7c6221f71ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5.scope/container/memory.events
Nov 23 15:41:19 np0005532761 podman[85847]: 2025-11-23 20:41:19.90373522 +0000 UTC m=+1.136818554 container died 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:41:19 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4a52a02b38b1b029202e119853b61eb490dd414614e9ac5f7dd71a4664a65fe5-merged.mount: Deactivated successfully.
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev b73c8b62-47f3-4a5d-9bcb-fb6ddf1cda19 (Updating mgr deployment (+2 -> 3))
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event b73c8b62-47f3-4a5d-9bcb-fb6ddf1cda19 (Updating mgr deployment (+2 -> 3)) in 10 seconds
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 412d4588-bdb0-48a5-a94a-85df3b97a8dc (Updating crash deployment (+1 -> 3))
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:41:19 np0005532761 podman[85847]: 2025-11-23 20:41:19.965039389 +0000 UTC m=+1.198122723 container remove 6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5 (image=quay.io/ceph/ceph:v19, name=beautiful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:41:19 np0005532761 systemd[1]: libpod-conmon-6908c33cf7c6221f71ce5f30b06edb4131d32d1a0c2b507c63c29bd9130568e5.scope: Deactivated successfully.
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 23 15:41:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 23 15:41:20 np0005532761 python3[85929]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.40558165 +0000 UTC m=+0.056062320 container create f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:20 np0005532761 systemd[1]: Started libpod-conmon-f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83.scope.
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.37923966 +0000 UTC m=+0.029720420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:20 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c4edcf82710ff970752cb79273cc6f4496122d3722d843e71720c54ad1ddb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c4edcf82710ff970752cb79273cc6f4496122d3722d843e71720c54ad1ddb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.507140347 +0000 UTC m=+0.157621047 container init f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.513048534 +0000 UTC m=+0.163529204 container start f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.516408234 +0000 UTC m=+0.166888924 container attach f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Nov 23 15:41:20 np0005532761 beautiful_solomon[85945]: enabled application 'rbd' on pool 'vms'
Nov 23 15:41:20 np0005532761 systemd[1]: libpod-f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83.scope: Deactivated successfully.
Nov 23 15:41:20 np0005532761 podman[85930]: 2025-11-23 20:41:20.982853143 +0000 UTC m=+0.633333833 container died f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Nov 23 15:41:20 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev bb923682-b26f-4716-8624-3e76823a18ca (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:41:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v76: 38 pgs: 33 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 systemd[1]: var-lib-containers-storage-overlay-25c4edcf82710ff970752cb79273cc6f4496122d3722d843e71720c54ad1ddb1-merged.mount: Deactivated successfully.
Nov 23 15:41:21 np0005532761 podman[85930]: 2025-11-23 20:41:21.218683676 +0000 UTC m=+0.869164376 container remove f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83 (image=quay.io/ceph/ceph:v19, name=beautiful_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:41:21 np0005532761 systemd[1]: libpod-conmon-f399d77538b079196b124915fecd98aec193da1a0053b3d0f3b5ec8290ec3a83.scope: Deactivated successfully.
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: Deploying daemon crash.compute-2 on compute-2
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532761 python3[86010]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:21 np0005532761 podman[86011]: 2025-11-23 20:41:21.6110886 +0000 UTC m=+0.055289210 container create 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 15:41:21 np0005532761 systemd[1]: Started libpod-conmon-324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a.scope.
Nov 23 15:41:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ee24eb47b0c2bc38b4cee744d93b1cfd2de9778c62d3c446640b4033b2da52/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ee24eb47b0c2bc38b4cee744d93b1cfd2de9778c62d3c446640b4033b2da52/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:21 np0005532761 podman[86011]: 2025-11-23 20:41:21.584715239 +0000 UTC m=+0.028915899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:21 np0005532761 podman[86011]: 2025-11-23 20:41:21.69090781 +0000 UTC m=+0.135108420 container init 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:21 np0005532761 podman[86011]: 2025-11-23 20:41:21.696133258 +0000 UTC m=+0.140333868 container start 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:41:21 np0005532761 podman[86011]: 2025-11-23 20:41:21.700122104 +0000 UTC m=+0.144322714 container attach 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Nov 23 15:41:21 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 842deaac-409a-4888-a514-afb58617ce5c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev dd96287e-d8f1-4437-ad50-87613fd4341e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event dd96287e-d8f1-4437-ad50-87613fd4341e (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 30bbcca5-465f-4969-a647-d60e186a8fb3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 30bbcca5-465f-4969-a647-d60e186a8fb3 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev bb923682-b26f-4716-8624-3e76823a18ca (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event bb923682-b26f-4716-8624-3e76823a18ca (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 842deaac-409a-4888-a514-afb58617ce5c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 23 15:41:21 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 842deaac-409a-4888-a514-afb58617ce5c (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Nov 23 15:41:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=22 pruub=10.842017174s) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active pruub 56.829341888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 22 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=12.397294044s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active pruub 58.384639740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 22 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=12.397294044s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown pruub 58.384639740s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=22 pruub=10.842017174s) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown pruub 56.829341888s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 412d4588-bdb0-48a5-a94a-85df3b97a8dc (Updating crash deployment (+1 -> 3))
Nov 23 15:41:22 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 412d4588-bdb0-48a5-a94a-85df3b97a8dc (Updating crash deployment (+1 -> 3)) in 2 seconds
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 10 completed events
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz started
Nov 23 15:41:22 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mgr.compute-2.jtkauz 192.168.122.102:0/2086279490; not ready for session (expect reconnect)
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.733905773 +0000 UTC m=+0.055481485 container create 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 15:41:22 np0005532761 systemd[75910]: Starting Mark boot as successful...
Nov 23 15:41:22 np0005532761 systemd[1]: Started libpod-conmon-54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf.scope.
Nov 23 15:41:22 np0005532761 systemd[75910]: Finished Mark boot as successful.
Nov 23 15:41:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.713660205 +0000 UTC m=+0.035235897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.81063472 +0000 UTC m=+0.132210422 container init 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.817566954 +0000 UTC m=+0.139142666 container start 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.821961161 +0000 UTC m=+0.143536893 container attach 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:41:22 np0005532761 vigorous_curie[86158]: 167 167
Nov 23 15:41:22 np0005532761 systemd[1]: libpod-54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf.scope: Deactivated successfully.
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.825619278 +0000 UTC m=+0.147194980 container died 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ac14b16f4923955cedd142bdbe34ef533a2f1261f1df5183923bf65d65c5dcd5-merged.mount: Deactivated successfully.
Nov 23 15:41:22 np0005532761 podman[86139]: 2025-11-23 20:41:22.874443515 +0000 UTC m=+0.196019207 container remove 54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_curie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:41:22 np0005532761 systemd[1]: libpod-conmon-54b5cefc8a29223e6c527c93b9d106dd81de5f88e75b9c60ce766bb60df9dcaf.scope: Deactivated successfully.
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Nov 23 15:41:22 np0005532761 zen_lamport[86026]: enabled application 'rbd' on pool 'volumes'
Nov 23 15:41:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.10( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.11( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.12( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.13( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.14( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.15( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.16( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.17( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.8( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.9( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.7( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.2( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.5( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.6( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.4( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.3( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.19( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=13/14 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.18( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.10( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.11( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.12( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.16( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.17( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=22/23 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.4( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=13/13 les/c/f=14/14/0 sis=22) [1] r=0 lpr=22 pi=[13,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.18( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 23 pg[4.7( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
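The two bursts of ceph-osd[83114] lines above trace the PG peering state machine on osd.1 after osdmap e23: every placement group in pools 3 and 4 first reports state<Start>: transitioning to Primary, and then state<Started/Primary/Active>: react AllReplicasActivated once all replicas have activated. As a reading aid for bursts like these, a minimal Python sketch that tallies the events per pool from a saved journal capture; the file name journal.txt and the regular expression are assumptions matched to the line format above, not part of the captured log:

    import re
    from collections import Counter

    # Matches the pg id and state-machine event in ceph-osd peering lines, e.g.
    # "... pg[3.18( empty ... mbc={}] state<Start>: transitioning to Primary"
    PG_LINE = re.compile(
        r"ceph-osd\[\d+\]: osd\.\d+ pg_epoch: \d+ "
        r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*state<[^>]+>: (?P<event>.+)$"
    )

    def tally(path="journal.txt"):
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                m = PG_LINE.search(line)
                if m:
                    pool = m.group("pgid").split(".")[0]  # pool id precedes the dot
                    counts[(pool, m.group("event"))] += 1
        return counts

    if __name__ == "__main__":
        for (pool, event), n in sorted(tally().items()):
            print(f"pool {pool}: {n:3d} x {event}")

The pg ids in each pool run from 0 through 1f hex, i.e. 32 PGs per pool, which is consistent with the pg_num_actual=32 commands and the pgmap line recorded below.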
Nov 23 15:41:23 np0005532761 systemd[1]: libpod-324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a.scope: Deactivated successfully.
Nov 23 15:41:23 np0005532761 podman[86011]: 2025-11-23 20:41:23.015522883 +0000 UTC m=+1.459723483 container died 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:41:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-93ee24eb47b0c2bc38b4cee744d93b1cfd2de9778c62d3c446640b4033b2da52-merged.mount: Deactivated successfully.
Nov 23 15:41:23 np0005532761 podman[86011]: 2025-11-23 20:41:23.059004998 +0000 UTC m=+1.503205608 container remove 324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a (image=quay.io/ceph/ceph:v19, name=zen_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:23 np0005532761 systemd[1]: libpod-conmon-324fa012938e8701bc65df4b8fad34d8ae167ed1a860f70afc6c0b5b677c831a.scope: Deactivated successfully.
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.109721095 +0000 UTC m=+0.082043581 container create 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:41:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v79: 100 pgs: 32 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:23 np0005532761 systemd[1]: Started libpod-conmon-50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844.scope.
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.089484997 +0000 UTC m=+0.061807513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.222203642 +0000 UTC m=+0.194526158 container init 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.231762236 +0000 UTC m=+0.204084722 container start 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.237204141 +0000 UTC m=+0.209526647 container attach 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"} v 0)
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 15:41:23 np0005532761 python3[86238]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
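The ansible-ansible.legacy.command entry above records, verbatim, the podman invocation used to run a one-shot ceph CLI command (the same pattern recurs for the images pool at 15:41:24 below): /etc/ceph and the assimilate_ceph.conf file are bind-mounted with :z so podman relabels them for SELinux, --entrypoint ceph overrides the image entrypoint, and everything after the image reference is passed to ceph itself. Because the recorded command is hard to read on one line, here is a small Python sketch (not part of the log) that shlex-splits it for inspection; the command string is copied from the _raw_params above with runs of whitespace collapsed:

    import shlex

    # Copied from the ansible _raw_params recorded in the journal.
    cmd = (
        "podman run --rm --net=host --ipc=host "
        "--volume /etc/ceph:/etc/ceph:z "
        "--volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z "
        "--entrypoint ceph quay.io/ceph/ceph:v19 "
        "--fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 "
        "-c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "
        "osd pool application enable backups rbd"
    )

    for arg in shlex.split(cmd):
        print(arg)

The audit dispatch and finished lines from mon.compute-0 at 15:41:23 below show this same command arriving at the monitor as {"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}.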
Nov 23 15:41:23 np0005532761 podman[86241]: 2025-11-23 20:41:23.422140233 +0000 UTC m=+0.042579572 container create 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:41:23 np0005532761 systemd[1]: Started libpod-conmon-3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d.scope.
Nov 23 15:41:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8f156cc85aa592e37f05675477bf813d4cc69b2ca33da2a5bbf4ec88fb94f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a8f156cc85aa592e37f05675477bf813d4cc69b2ca33da2a5bbf4ec88fb94f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:23 np0005532761 podman[86241]: 2025-11-23 20:41:23.497827363 +0000 UTC m=+0.118266592 container init 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:41:23 np0005532761 podman[86241]: 2025-11-23 20:41:23.404979077 +0000 UTC m=+0.025418326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:23 np0005532761 podman[86241]: 2025-11-23 20:41:23.506140204 +0000 UTC m=+0.126579443 container start 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:23 np0005532761 podman[86241]: 2025-11-23 20:41:23.511884876 +0000 UTC m=+0.132324275 container attach 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:23 np0005532761 intelligent_wiles[86218]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:41:23 np0005532761 intelligent_wiles[86218]: --> All data devices are unavailable
Nov 23 15:41:23 np0005532761 systemd[1]: libpod-50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844.scope: Deactivated successfully.
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.564173505 +0000 UTC m=+0.536496001 container died 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:23 np0005532761 podman[86182]: 2025-11-23 20:41:23.621051486 +0000 UTC m=+0.593373972 container remove 50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:23 np0005532761 systemd[1]: libpod-conmon-50404b9cc3be682ba0463e9d7b486698a227bd27f9093ff510127570be91b844.scope: Deactivated successfully.
Nov 23 15:41:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f3ed260cd0d11a00165f8b022d699bf03a270f93ee95728f4df51645d1b61d09-merged.mount: Deactivated successfully.
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 23 15:41:23 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Nov 23 15:41:23 np0005532761 agitated_euclid[86262]: enabled application 'rbd' on pool 'backups'
Nov 23 15:41:23 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
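The monitor audit entries in this section come in pairs: a dispatch line when mon.compute-0 receives a command and a finished line once it commits (the finished form quotes the cmd= payload, the dispatch form does not). A minimal sketch, assuming the same journal.txt capture as above, that pairs the two phases by payload to spot commands that were dispatched but never finished:

    import re
    from collections import defaultdict

    # Matches both audit forms seen in this log:
    #   cmd=[{...}]: dispatch
    #   cmd='[{...}]': finished
    AUDIT = re.compile(r"cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)")

    def audit_phases(path="journal.txt"):
        phases = defaultdict(set)
        with open(path) as fh:
            for line in fh:
                m = AUDIT.search(line)
                if m:
                    phases[m.group("cmd")].add(m.group("phase"))
        return phases

    if __name__ == "__main__":
        for cmd, seen in audit_phases().items():
            status = "completed" if {"dispatch", "finished"} <= seen else "pending"
            print(f"{status}: {cmd}")

A finished entry can of course appear without its dispatch in a short excerpt like this one, since the two phases may fall on either side of the excerpt boundary.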
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d.scope: Deactivated successfully.
Nov 23 15:41:24 np0005532761 podman[86241]: 2025-11-23 20:41:24.005699833 +0000 UTC m=+0.626139062 container died 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 15:41:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-14a8f156cc85aa592e37f05675477bf813d4cc69b2ca33da2a5bbf4ec88fb94f-merged.mount: Deactivated successfully.
Nov 23 15:41:24 np0005532761 podman[86241]: 2025-11-23 20:41:24.061713461 +0000 UTC m=+0.682152700 container remove 3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d (image=quay.io/ceph/ceph:v19, name=agitated_euclid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-conmon-3b55256143314be63799fd9ba7a233aa2db0f6292340eb7eff50265581082e2d.scope: Deactivated successfully.
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.176275564 +0000 UTC m=+0.039413278 container create 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:24 np0005532761 systemd[1]: Started libpod-conmon-1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c.scope.
Nov 23 15:41:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.15807503 +0000 UTC m=+0.021212774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.2634914 +0000 UTC m=+0.126629134 container init 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.277736308 +0000 UTC m=+0.140874032 container start 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.281930039 +0000 UTC m=+0.145067763 container attach 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c.scope: Deactivated successfully.
Nov 23 15:41:24 np0005532761 silly_noyce[86447]: 167 167
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.28496062 +0000 UTC m=+0.148098364 container died 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:41:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a809523f1f2a73c69c04e5194cfd66c5d9d1e6d0002c0b2f8170a100e7d6d08c-merged.mount: Deactivated successfully.
Nov 23 15:41:24 np0005532761 podman[86406]: 2025-11-23 20:41:24.336421707 +0000 UTC m=+0.199559431 container remove 1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-conmon-1ad03a0d4a34ee1e0d784a4a57f854332ccdf585070012f63abc391097b8841c.scope: Deactivated successfully.
Nov 23 15:41:24 np0005532761 python3[86449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:24 np0005532761 podman[86469]: 2025-11-23 20:41:24.414701746 +0000 UTC m=+0.045843539 container create 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"} v 0)
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 23 15:41:24 np0005532761 systemd[1]: Started libpod-conmon-15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf.scope.
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]': finished
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:24 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455ac01e03450af0950623333677c8a1e56db1036c40f8e8b1d776c8fd54d93c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455ac01e03450af0950623333677c8a1e56db1036c40f8e8b1d776c8fd54d93c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 podman[86469]: 2025-11-23 20:41:24.389587309 +0000 UTC m=+0.020729072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:24 np0005532761 podman[86469]: 2025-11-23 20:41:24.494635119 +0000 UTC m=+0.125776972 container init 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:24 np0005532761 podman[86469]: 2025-11-23 20:41:24.500197017 +0000 UTC m=+0.131338760 container start 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:41:24 np0005532761 podman[86469]: 2025-11-23 20:41:24.504255565 +0000 UTC m=+0.135397338 container attach 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.528099348 +0000 UTC m=+0.054575381 container create e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 24 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=9.006848335s) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active pruub 57.548686981s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=9.006848335s) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown pruub 57.548686981s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.10( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.11( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.12( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.13( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.14( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.15( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.16( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.17( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.18( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.19( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.4( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.5( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.6( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.7( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.1f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.8( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.9( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.2( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 25 pg[5.3( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:24 np0005532761 systemd[1]: Started libpod-conmon-e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6.scope.
Nov 23 15:41:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e3de47b56cf55ce0361c5222260aacb99b496e08d5dcc49665007ca66f4ed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e3de47b56cf55ce0361c5222260aacb99b496e08d5dcc49665007ca66f4ed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e3de47b56cf55ce0361c5222260aacb99b496e08d5dcc49665007ca66f4ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e3de47b56cf55ce0361c5222260aacb99b496e08d5dcc49665007ca66f4ed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.601739865 +0000 UTC m=+0.128215918 container init e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.5119871 +0000 UTC m=+0.038463163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.610261091 +0000 UTC m=+0.136737124 container start e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.614296768 +0000 UTC m=+0.140772791 container attach e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Nov 23 15:41:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]: {
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:    "1": [
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:        {
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "devices": [
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "/dev/loop3"
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            ],
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "lv_name": "ceph_lv0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "lv_size": "21470642176",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "name": "ceph_lv0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "tags": {
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.cluster_name": "ceph",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.crush_device_class": "",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.encrypted": "0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.osd_id": "1",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.type": "block",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.vdo": "0",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:                "ceph.with_tpm": "0"
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            },
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "type": "block",
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:            "vg_name": "ceph_vg0"
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:        }
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]:    ]
Nov 23 15:41:24 np0005532761 jovial_davinci[86509]: }
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6.scope: Deactivated successfully.
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.899695078 +0000 UTC m=+0.426171111 container died e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 23 15:41:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d4e3de47b56cf55ce0361c5222260aacb99b496e08d5dcc49665007ca66f4ed5-merged.mount: Deactivated successfully.
Nov 23 15:41:24 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 23 15:41:24 np0005532761 podman[86489]: 2025-11-23 20:41:24.942283549 +0000 UTC m=+0.468759582 container remove e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:24 np0005532761 systemd[1]: libpod-conmon-e8714d0dd936b959120d59a2b2c1edb8e1733251449d0483f4375f6bc35fa5a6.scope: Deactivated successfully.
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/1014258786' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]': finished
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 23 15:41:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v82: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.557583842 +0000 UTC m=+0.060411895 container create 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Nov 23 15:41:25 np0005532761 upbeat_blackburn[86491]: enabled application 'rbd' on pool 'images'
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:25 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.11( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 podman[86469]: 2025-11-23 20:41:25.594715099 +0000 UTC m=+1.225856852 container died 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1e( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.12( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.15( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 systemd[1]: Started libpod-conmon-154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556.scope.
Nov 23 15:41:25 np0005532761 systemd[1]: libpod-15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf.scope: Deactivated successfully.
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.17( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.14( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.13( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.10( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.16( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.9( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.8( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.a( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.c( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.6( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.3( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.0( empty local-lis/les=24/26 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.4( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.5( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.e( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1c( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1a( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.1b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.18( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.7( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.19( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 26 pg[5.2( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-455ac01e03450af0950623333677c8a1e56db1036c40f8e8b1d776c8fd54d93c-merged.mount: Deactivated successfully.
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.539140732 +0000 UTC m=+0.041968805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:25 np0005532761 podman[86469]: 2025-11-23 20:41:25.637954807 +0000 UTC m=+1.269096560 container remove 15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf (image=quay.io/ceph/ceph:v19, name=upbeat_blackburn, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:41:25 np0005532761 systemd[1]: libpod-conmon-15b228a65b4bb8b222a923a91caa170f4c477b1fc65b40d62d1b6f72937c10cf.scope: Deactivated successfully.
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.651110916 +0000 UTC m=+0.153938979 container init 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.65800885 +0000 UTC m=+0.160836903 container start 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.661395579 +0000 UTC m=+0.164223632 container attach 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:41:25 np0005532761 dazzling_knuth[86658]: 167 167
Nov 23 15:41:25 np0005532761 systemd[1]: libpod-154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556.scope: Deactivated successfully.
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.662487518 +0000 UTC m=+0.165315571 container died 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-251a12b98a81411c9fdb0e202a1890971e49a8998d1f97524ceba1b44d9b9171-merged.mount: Deactivated successfully.
Nov 23 15:41:25 np0005532761 podman[86640]: 2025-11-23 20:41:25.702067739 +0000 UTC m=+0.204895792 container remove 154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:41:25 np0005532761 systemd[1]: libpod-conmon-154fcc77e26c96ac34c05ca5ea11bd187bba3b359b31c99013a858e63aca3556.scope: Deactivated successfully.
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 23 15:41:25 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 23 15:41:25 np0005532761 podman[86719]: 2025-11-23 20:41:25.904168128 +0000 UTC m=+0.065703326 container create 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:25 np0005532761 systemd[1]: Started libpod-conmon-88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d.scope.
Nov 23 15:41:25 np0005532761 python3[86714]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:25 np0005532761 podman[86719]: 2025-11-23 20:41:25.866705703 +0000 UTC m=+0.028240961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7a8b885ad148c013cfa274568d2774deeb216098f634bb0f6d9e3e6c2f1028/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7a8b885ad148c013cfa274568d2774deeb216098f634bb0f6d9e3e6c2f1028/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7a8b885ad148c013cfa274568d2774deeb216098f634bb0f6d9e3e6c2f1028/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7a8b885ad148c013cfa274568d2774deeb216098f634bb0f6d9e3e6c2f1028/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:25 np0005532761 podman[86719]: 2025-11-23 20:41:25.985447077 +0000 UTC m=+0.146982255 container init 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:25 np0005532761 podman[86719]: 2025-11-23 20:41:25.99200575 +0000 UTC m=+0.153540908 container start 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:25 np0005532761 podman[86719]: 2025-11-23 20:41:25.995847563 +0000 UTC m=+0.157382741 container attach 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:41:26 np0005532761 podman[86738]: 2025-11-23 20:41:26.019286876 +0000 UTC m=+0.041763851 container create c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:26 np0005532761 systemd[1]: Started libpod-conmon-c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17.scope.
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:26 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mgr.compute-1.kgyerp 192.168.122.101:0/1201675343; not ready for session (expect reconnect)
Nov 23 15:41:26 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8eaecaa0ac253d04ea1e4e8e933e82d85dd57a9ca8d8143f818064bf6d95d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8eaecaa0ac253d04ea1e4e8e933e82d85dd57a9ca8d8143f818064bf6d95d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:26 np0005532761 podman[86738]: 2025-11-23 20:41:26.004270786 +0000 UTC m=+0.026747761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:26 np0005532761 podman[86738]: 2025-11-23 20:41:26.10944924 +0000 UTC m=+0.131926225 container init c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:26 np0005532761 podman[86738]: 2025-11-23 20:41:26.116354263 +0000 UTC m=+0.138831228 container start c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:26 np0005532761 podman[86738]: 2025-11-23 20:41:26.120158525 +0000 UTC m=+0.142635510 container attach c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp started
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 23 15:41:26 np0005532761 lvm[86851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:41:26 np0005532761 lvm[86851]: VG ceph_vg0 finished
Nov 23 15:41:26 np0005532761 exciting_northcutt[86735]: {}
Nov 23 15:41:26 np0005532761 systemd[1]: libpod-88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d.scope: Deactivated successfully.
Nov 23 15:41:26 np0005532761 systemd[1]: libpod-88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d.scope: Consumed 1.264s CPU time.
Nov 23 15:41:26 np0005532761 podman[86719]: 2025-11-23 20:41:26.785868246 +0000 UTC m=+0.947403494 container died 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5e7a8b885ad148c013cfa274568d2774deeb216098f634bb0f6d9e3e6c2f1028-merged.mount: Deactivated successfully.
Nov 23 15:41:26 np0005532761 podman[86719]: 2025-11-23 20:41:26.844667128 +0000 UTC m=+1.006202306 container remove 88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:26 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 23 15:41:26 np0005532761 systemd[1]: libpod-conmon-88abc3d1f27aa87925df3b9f9082decad1498f2b01770401e30336fb662f140d.scope: Deactivated successfully.
Nov 23 15:41:26 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:41:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:27 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from mgr.compute-1.kgyerp 192.168.122.101:0/1201675343; not ready for session (expect reconnect)
Nov 23 15:41:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"} v 0)
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Nov 23 15:41:27 np0005532761 confident_ishizaka[86758]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:27 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:27 np0005532761 systemd[1]: libpod-c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17.scope: Deactivated successfully.
Nov 23 15:41:27 np0005532761 podman[86738]: 2025-11-23 20:41:27.245871565 +0000 UTC m=+1.268348520 container died c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a2a8eaecaa0ac253d04ea1e4e8e933e82d85dd57a9ca8d8143f818064bf6d95d-merged.mount: Deactivated successfully.
Nov 23 15:41:27 np0005532761 podman[86738]: 2025-11-23 20:41:27.295573046 +0000 UTC m=+1.318050041 container remove c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17 (image=quay.io/ceph/ceph:v19, name=confident_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:27 np0005532761 systemd[1]: libpod-conmon-c33dab16e164ed8508a961c5252580ad8d80e24d1eb9768d578fb3d01fe88e17.scope: Deactivated successfully.
Nov 23 15:41:27 np0005532761 python3[86903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:27 np0005532761 podman[86904]: 2025-11-23 20:41:27.671316265 +0000 UTC m=+0.050903742 container create dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:41:27 np0005532761 systemd[1]: Started libpod-conmon-dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6.scope.
Nov 23 15:41:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:27 np0005532761 podman[86904]: 2025-11-23 20:41:27.651287164 +0000 UTC m=+0.030874651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10e2d95149ded8ae8d1f7f1d25cc81d0786ef981e8bb4cd2ccaa30a5b77771b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10e2d95149ded8ae8d1f7f1d25cc81d0786ef981e8bb4cd2ccaa30a5b77771b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:27 np0005532761 podman[86904]: 2025-11-23 20:41:27.765309472 +0000 UTC m=+0.144896959 container init dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:27 np0005532761 podman[86904]: 2025-11-23 20:41:27.772929684 +0000 UTC m=+0.152517181 container start dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:41:27 np0005532761 podman[86904]: 2025-11-23 20:41:27.777256879 +0000 UTC m=+0.156844356 container attach dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 15:41:27 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 23 15:41:27 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:28 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Nov 23 15:41:28 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Nov 23 15:41:28 np0005532761 naughty_visvesvaraya[86919]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:28 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:28 np0005532761 systemd[1]: libpod-dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6.scope: Deactivated successfully.
Nov 23 15:41:28 np0005532761 conmon[86919]: conmon dc9f727d0299bc3d669d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6.scope/container/memory.events
Nov 23 15:41:28 np0005532761 podman[86904]: 2025-11-23 20:41:28.970848551 +0000 UTC m=+1.350436008 container died dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:41:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b10e2d95149ded8ae8d1f7f1d25cc81d0786ef981e8bb4cd2ccaa30a5b77771b-merged.mount: Deactivated successfully.
Nov 23 15:41:29 np0005532761 podman[86904]: 2025-11-23 20:41:29.015822226 +0000 UTC m=+1.395409683 container remove dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6 (image=quay.io/ceph/ceph:v19, name=naughty_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:41:29 np0005532761 systemd[1]: libpod-conmon-dc9f727d0299bc3d669df585d86adfd355498b680fca188640a3461e0089adc6.scope: Deactivated successfully.
Nov 23 15:41:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:29 np0005532761 python3[87029]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.621056557s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.601501465s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.621006966s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.601501465s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.027401924s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007965088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.620900154s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.601486206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.027382851s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007965088s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.620882034s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.601486206s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.622011185s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.602706909s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.621986389s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.602706909s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026142120s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007507324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.025452614s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.006816864s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026126862s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007507324s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.025434494s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.006816864s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023770332s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007114410s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023748398s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007114410s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023135185s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007156372s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618649483s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.602706909s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023113251s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007156372s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618630409s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.602706909s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023026466s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007167816s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022979736s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007167816s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023168564s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007404327s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023151398s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007404327s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022877693s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007179260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022853851s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007179260s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619615555s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604026794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619600296s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604026794s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619510651s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604034424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022889137s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007442474s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022861481s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007408142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619475365s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604034424s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022870064s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007442474s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022825241s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007408142s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022924423s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007617950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022909164s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007617950s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023771286s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008514404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023756027s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008514404s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023021698s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007907867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023001671s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007896423s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023006439s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007907867s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022973061s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007904053s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022954941s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007904053s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022933960s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007896423s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022861481s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007976532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022839546s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.007980347s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022844315s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007976532s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022823334s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.007980347s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023064613s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008346558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023048401s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008346558s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023253441s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008567810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.023207664s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008567810s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618848801s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604270935s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022952080s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008529663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022979736s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008605957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022895813s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008529663s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619409561s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.605056763s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618648529s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604270935s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022932053s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008605957s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.619342804s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.605056763s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022839546s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008716583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022769928s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008647919s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022823334s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008716583s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022750854s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008647919s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.621058464s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.606964111s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.621041298s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.606964111s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022674561s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008762360s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618324280s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604454041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022658348s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008762360s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022715569s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008857727s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618268013s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604423523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618305206s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604454041s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022695541s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008857727s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618043900s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604339600s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618058205s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604423523s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617984772s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604438782s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617949486s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604438782s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022273064s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008831024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022254944s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008831024s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617891312s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604522705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022197723s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008850098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617875099s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604522705s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022180557s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008850098s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618124008s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604820251s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022185326s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008888245s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618109703s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604820251s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022143364s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.008899689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022152901s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008888245s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026578903s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.013366699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.022129059s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.008899689s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.618014336s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 65.604835510s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026562691s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.013366699s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617998123s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604835510s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026438713s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 63.013374329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=9.026424408s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.013374329s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29 pruub=11.617993355s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.604339600s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:30 np0005532761 python3[87100]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930489.6981685-37360-223655383857927/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:30 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:30 np0005532761 python3[87202]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: Cluster is now healthy
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 23 15:41:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 23 15:41:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:31 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 23 15:41:31 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 23 15:41:31 np0005532761 python3[87277]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930490.6911182-37374-80385889312505/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a98850ed33ef95e83a3c0fc80b5132750bbd2974 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:41:31 np0005532761 python3[87327]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:31 np0005532761 podman[87328]: 2025-11-23 20:41:31.764873644 +0000 UTC m=+0.062907082 container create 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:31 np0005532761 systemd[1]: Started libpod-conmon-9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d.scope.
Nov 23 15:41:31 np0005532761 podman[87328]: 2025-11-23 20:41:31.732956946 +0000 UTC m=+0.030990464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4694133a6bc60a3558c66f4ed1a4f4b8da3f99ec68be4f50da85adb10a859586/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4694133a6bc60a3558c66f4ed1a4f4b8da3f99ec68be4f50da85adb10a859586/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4694133a6bc60a3558c66f4ed1a4f4b8da3f99ec68be4f50da85adb10a859586/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 23 15:41:31 np0005532761 podman[87328]: 2025-11-23 20:41:31.855054949 +0000 UTC m=+0.153088387 container init 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:41:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 23 15:41:31 np0005532761 podman[87328]: 2025-11-23 20:41:31.865311881 +0000 UTC m=+0.163345299 container start 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:31 np0005532761 podman[87328]: 2025-11-23 20:41:31.871529167 +0000 UTC m=+0.169562595 container attach 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:32 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 23 15:41:32 np0005532761 ceph-mon[74569]: Deploying daemon osd.2 on compute-2
Nov 23 15:41:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 23 15:41:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:41:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 15:41:32 np0005532761 infallible_mirzakhani[87343]: 
Nov 23 15:41:32 np0005532761 infallible_mirzakhani[87343]: [global]
Nov 23 15:41:32 np0005532761 infallible_mirzakhani[87343]: 	fsid = 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:41:32 np0005532761 infallible_mirzakhani[87343]: 	mon_host = 192.168.122.100
Nov 23 15:41:32 np0005532761 systemd[1]: libpod-9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d.scope: Deactivated successfully.
Nov 23 15:41:32 np0005532761 podman[87328]: 2025-11-23 20:41:32.227721517 +0000 UTC m=+0.525754945 container died 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4694133a6bc60a3558c66f4ed1a4f4b8da3f99ec68be4f50da85adb10a859586-merged.mount: Deactivated successfully.
Nov 23 15:41:32 np0005532761 podman[87328]: 2025-11-23 20:41:32.265580863 +0000 UTC m=+0.563614271 container remove 9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d (image=quay.io/ceph/ceph:v19, name=infallible_mirzakhani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:32 np0005532761 systemd[1]: libpod-conmon-9c9312bc4a1ce66e80611005913b638f598a063b2fc68563e9f317317919e85d.scope: Deactivated successfully.
Nov 23 15:41:32 np0005532761 python3[87404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:32 np0005532761 podman[87405]: 2025-11-23 20:41:32.651071701 +0000 UTC m=+0.056965093 container create cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:41:32 np0005532761 systemd[1]: Started libpod-conmon-cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13.scope.
Nov 23 15:41:32 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ecf49c246bc93b25537c92152bc8148c9dfbb32487d6715c98c2570eaa184/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ecf49c246bc93b25537c92152bc8148c9dfbb32487d6715c98c2570eaa184/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ecf49c246bc93b25537c92152bc8148c9dfbb32487d6715c98c2570eaa184/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:32 np0005532761 podman[87405]: 2025-11-23 20:41:32.72138968 +0000 UTC m=+0.127283082 container init cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:32 np0005532761 podman[87405]: 2025-11-23 20:41:32.729139455 +0000 UTC m=+0.135032857 container start cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:32 np0005532761 podman[87405]: 2025-11-23 20:41:32.634769589 +0000 UTC m=+0.040663011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:32 np0005532761 podman[87405]: 2025-11-23 20:41:32.733131731 +0000 UTC m=+0.139025193 container attach cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:41:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 23 15:41:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 23 15:41:33 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:41:33 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 15:41:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Nov 23 15:41:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/475116719' entity='client.admin' 
Nov 23 15:41:33 np0005532761 nostalgic_wilson[87420]: set ssl_option
Nov 23 15:41:33 np0005532761 systemd[1]: libpod-cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13.scope: Deactivated successfully.
Nov 23 15:41:33 np0005532761 podman[87405]: 2025-11-23 20:41:33.214916538 +0000 UTC m=+0.620809960 container died cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:41:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c44ecf49c246bc93b25537c92152bc8148c9dfbb32487d6715c98c2570eaa184-merged.mount: Deactivated successfully.
Nov 23 15:41:33 np0005532761 podman[87405]: 2025-11-23 20:41:33.250884354 +0000 UTC m=+0.656777756 container remove cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13 (image=quay.io/ceph/ceph:v19, name=nostalgic_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:41:33 np0005532761 systemd[1]: libpod-conmon-cb5374273c75a64599db3661ca982a7386b4e5490e32496fb6f4d950de009e13.scope: Deactivated successfully.
Nov 23 15:41:33 np0005532761 python3[87481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:33 np0005532761 podman[87482]: 2025-11-23 20:41:33.605833511 +0000 UTC m=+0.046291360 container create 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:33 np0005532761 systemd[1]: Started libpod-conmon-7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1.scope.
Nov 23 15:41:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f15dc35df75cf0b82031fa840686aeb83d6636b8f94e96c2e322796c0e5919f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f15dc35df75cf0b82031fa840686aeb83d6636b8f94e96c2e322796c0e5919f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f15dc35df75cf0b82031fa840686aeb83d6636b8f94e96c2e322796c0e5919f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:33 np0005532761 podman[87482]: 2025-11-23 20:41:33.671110775 +0000 UTC m=+0.111568634 container init 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:41:33 np0005532761 podman[87482]: 2025-11-23 20:41:33.676229652 +0000 UTC m=+0.116687491 container start 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:41:33 np0005532761 podman[87482]: 2025-11-23 20:41:33.679632282 +0000 UTC m=+0.120090141 container attach 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:33 np0005532761 podman[87482]: 2025-11-23 20:41:33.587302319 +0000 UTC m=+0.027760168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 23 15:41:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 23 15:41:34 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/475116719' entity='client.admin' 
Nov 23 15:41:34 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:34 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:41:34 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:41:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:41:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:34 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 23 15:41:34 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 23 15:41:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:41:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:34 np0005532761 funny_rhodes[87497]: Scheduled rgw.rgw update...
Nov 23 15:41:34 np0005532761 funny_rhodes[87497]: Scheduled ingress.rgw.default update...
Nov 23 15:41:34 np0005532761 systemd[1]: libpod-7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1.scope: Deactivated successfully.
Nov 23 15:41:34 np0005532761 podman[87482]: 2025-11-23 20:41:34.161938513 +0000 UTC m=+0.602396342 container died 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5f15dc35df75cf0b82031fa840686aeb83d6636b8f94e96c2e322796c0e5919f-merged.mount: Deactivated successfully.
Nov 23 15:41:34 np0005532761 podman[87482]: 2025-11-23 20:41:34.334989589 +0000 UTC m=+0.775447418 container remove 7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1 (image=quay.io/ceph/ceph:v19, name=funny_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:41:34 np0005532761 systemd[1]: libpod-conmon-7bf0fed0234dadbad42230a69b8d94c13e3348dd63eb562657feb8eaefb14bc1.scope: Deactivated successfully.
Nov 23 15:41:34 np0005532761 python3[87609]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:41:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 23 15:41:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 23 15:41:35 np0005532761 python3[87680]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930494.4878633-37393-202035987265092/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:41:35 np0005532761 ceph-mon[74569]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:41:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:35 np0005532761 ceph-mon[74569]: Saving service ingress.rgw.default spec with placement count:2
Nov 23 15:41:35 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:35 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 23 15:41:35 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:36 np0005532761 python3[87730]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.393053353 +0000 UTC m=+0.055087234 container create 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:41:36 np0005532761 systemd[1]: Started libpod-conmon-28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73.scope.
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.365281755 +0000 UTC m=+0.027315686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878119e70df94a9a59878ab908a5321caf530136f325f7294ac74d78af709cce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878119e70df94a9a59878ab908a5321caf530136f325f7294ac74d78af709cce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878119e70df94a9a59878ab908a5321caf530136f325f7294ac74d78af709cce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.495277908 +0000 UTC m=+0.157311879 container init 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.501697979 +0000 UTC m=+0.163731850 container start 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.512355751 +0000 UTC m=+0.174389652 container attach 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 23 15:41:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service node-exporter spec with placement *
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 23 15:41:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:36 np0005532761 charming_swanson[87747]: Scheduled node-exporter update...
Nov 23 15:41:36 np0005532761 charming_swanson[87747]: Scheduled grafana update...
Nov 23 15:41:36 np0005532761 charming_swanson[87747]: Scheduled prometheus update...
Nov 23 15:41:36 np0005532761 charming_swanson[87747]: Scheduled alertmanager update...
Nov 23 15:41:36 np0005532761 systemd[1]: libpod-28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73.scope: Deactivated successfully.
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.911838553 +0000 UTC m=+0.573872424 container died 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:41:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-878119e70df94a9a59878ab908a5321caf530136f325f7294ac74d78af709cce-merged.mount: Deactivated successfully.
Nov 23 15:41:36 np0005532761 podman[87731]: 2025-11-23 20:41:36.953648852 +0000 UTC m=+0.615682733 container remove 28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73 (image=quay.io/ceph/ceph:v19, name=charming_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:36 np0005532761 systemd[1]: libpod-conmon-28df5fb729dba021437303866e55e3cdaa530470f5b1c0c046ab8ecc8b037d73.scope: Deactivated successfully.
Nov 23 15:41:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v93: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:37 np0005532761 python3[87809]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.504309199 +0000 UTC m=+0.053215224 container create fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:41:37 np0005532761 systemd[1]: Started libpod-conmon-fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3.scope.
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.47837112 +0000 UTC m=+0.027277225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f2904224186128baba7a700b86a7ac60e4e4bad416f2a8ccb8deb5dab653c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f2904224186128baba7a700b86a7ac60e4e4bad416f2a8ccb8deb5dab653c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f2904224186128baba7a700b86a7ac60e4e4bad416f2a8ccb8deb5dab653c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.594437733 +0000 UTC m=+0.143343848 container init fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.601937512 +0000 UTC m=+0.150843527 container start fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.605691132 +0000 UTC m=+0.154597247 container attach fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:41:37 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 23 15:41:37 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: Saving service node-exporter spec with placement *
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: Saving service grafana spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: Saving service prometheus spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: Saving service alertmanager spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Nov 23 15:41:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/209025710' entity='client.admin' 
Nov 23 15:41:37 np0005532761 systemd[1]: libpod-fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3.scope: Deactivated successfully.
Nov 23 15:41:37 np0005532761 podman[87810]: 2025-11-23 20:41:37.980894107 +0000 UTC m=+0.529800132 container died fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fa6f2904224186128baba7a700b86a7ac60e4e4bad416f2a8ccb8deb5dab653c-merged.mount: Deactivated successfully.
Nov 23 15:41:38 np0005532761 podman[87810]: 2025-11-23 20:41:38.027839825 +0000 UTC m=+0.576745880 container remove fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3 (image=quay.io/ceph/ceph:v19, name=youthful_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:38 np0005532761 systemd[1]: libpod-conmon-fcb7f49710408da0df34fa02a373dd0cc107bb6a23906f77761c08e316f5cfc3.scope: Deactivated successfully.
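[annotation] The Ansible task at 15:41:37 wraps a single ceph CLI call in a throwaway container: podman resolves quay.io/ceph/ceph:v19 from the local store (hence the instant "image pull" entry), bind-mounts /etc/ceph, runs the command, and removes the container again (--rm), which is why each setting produces a full create/init/start/attach/died/remove sequence. Stripped of the container wrapper, the operation is simply the following (a sketch, assuming a host with the admin keyring at the default path):

    # set the dashboard's plain-HTTP port; same call as the podman-wrapped task above
    ceph --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
         -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
         config set mgr mgr/dashboard/server_port 8443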
Nov 23 15:41:38 np0005532761 python3[87888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.391558295 +0000 UTC m=+0.053080891 container create c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:38 np0005532761 systemd[1]: Started libpod-conmon-c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b.scope.
Nov 23 15:41:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe279aab4c9ba88c268d0777a53380addcff47cf879420dae717b3ebb8fa819/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe279aab4c9ba88c268d0777a53380addcff47cf879420dae717b3ebb8fa819/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe279aab4c9ba88c268d0777a53380addcff47cf879420dae717b3ebb8fa819/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.367124146 +0000 UTC m=+0.028646772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.466620969 +0000 UTC m=+0.128143615 container init c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.473547973 +0000 UTC m=+0.135070579 container start c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.477859277 +0000 UTC m=+0.139381893 container attach c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 15:41:38 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 23 15:41:38 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 23 15:41:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Nov 23 15:41:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3810457862' entity='client.admin' 
Nov 23 15:41:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:38 np0005532761 systemd[1]: libpod-c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b.scope: Deactivated successfully.
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.839219005 +0000 UTC m=+0.500741601 container died c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6fe279aab4c9ba88c268d0777a53380addcff47cf879420dae717b3ebb8fa819-merged.mount: Deactivated successfully.
Nov 23 15:41:38 np0005532761 podman[87889]: 2025-11-23 20:41:38.942128149 +0000 UTC m=+0.603650745 container remove c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b (image=quay.io/ceph/ceph:v19, name=goofy_wescoff, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:38 np0005532761 systemd[1]: libpod-conmon-c947e9f0b78d5269565e4190addc8469335426f50178adb13441ccbaaae0768b.scope: Deactivated successfully.
Nov 23 15:41:38 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/209025710' entity='client.admin' 
Nov 23 15:41:38 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/3810457862' entity='client.admin' 
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:39 np0005532761 python3[87977]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.340096889 +0000 UTC m=+0.052172626 container create 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:41:39 np0005532761 systemd[1]: Started libpod-conmon-0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf.scope.
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.315422383 +0000 UTC m=+0.027498160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01b30d199a084dd5232f61385a4cb66dec9db57c6d854bff172f18fd1c7440/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01b30d199a084dd5232f61385a4cb66dec9db57c6d854bff172f18fd1c7440/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01b30d199a084dd5232f61385a4cb66dec9db57c6d854bff172f18fd1c7440/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.434921588 +0000 UTC m=+0.146997345 container init 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.440801554 +0000 UTC m=+0.152877271 container start 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.445023686 +0000 UTC m=+0.157099413 container attach 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
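[annotation] osd.2 classifies its own backing device on boot: the OSD submits "osd crush set-device-class" for itself, and the mon dispatches it here (the matching "finished" entry appears at 15:41:40). The same assignment can be made manually; a sketch, assuming admin access (re-running it against an already-classified OSD is rejected unless the class is removed first):

    # assign the hdd device class to osd.2, then verify in the CRUSH tree
    ceph osd crush set-device-class hdd osd.2
    ceph osd tree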
Nov 23 15:41:39 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 23 15:41:39 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1043924838' entity='client.admin' 
Nov 23 15:41:39 np0005532761 systemd[1]: libpod-0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf.scope: Deactivated successfully.
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.814943882 +0000 UTC m=+0.527019619 container died 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:41:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1d01b30d199a084dd5232f61385a4cb66dec9db57c6d854bff172f18fd1c7440-merged.mount: Deactivated successfully.
Nov 23 15:41:39 np0005532761 podman[87990]: 2025-11-23 20:41:39.859249439 +0000 UTC m=+0.571325166 container remove 0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf (image=quay.io/ceph/ceph:v19, name=xenodochial_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:41:39 np0005532761 systemd[1]: libpod-conmon-0955bcd1ba800eaaeaa61d11eb2fa055b7091ac752949f4df0d8442957e73cbf.scope: Deactivated successfully.
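[annotation] Taken together, the three dashboard settings applied so far (server_port=8443, ssl_server_port=8443, ssl=false) configure the mgr dashboard for plain HTTP on 8443: with ssl disabled the dashboard binds server_port, and ssl_server_port only takes effect if ssl is re-enabled. A quick way to confirm the result (sketch):

    # confirm the dashboard settings and the URL the mgr actually publishes
    ceph config get mgr mgr/dashboard/ssl
    ceph config get mgr mgr/dashboard/server_port
    ceph mgr services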
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 15:41:39 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1043924838' entity='client.admin' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:40 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
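[annotation] The initial CRUSH weight of 0.0195 is the device size expressed in TiB: 0.0195 x 1024 GiB is roughly 20 GiB, consistent with the 40 GiB total reported by the pgmap entries while two OSDs are up. create-or-move places osd.2 under host=compute-2 beneath root=default, creating the host bucket if it does not yet exist. To see the resulting hierarchy (sketch):

    # weights in the tree are TiB; ~0.0195 per 20 GiB OSD
    ceph osd tree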
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 python3[88147]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:40 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 23 15:41:40 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
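[annotation] The config-key writes under mgr/cephadm/host.* are cephadm's per-host cache: device inventory (host.compute-N.devices.0) and host metadata (host.compute-N), refreshed as the orchestrator scans each node. The cached keys can be listed directly from the mon config-key store (sketch):

    # list cephadm's cached per-host state
    ceph config-key ls | grep 'mgr/cephadm/host'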
Nov 23 15:41:40 np0005532761 python3[88186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.oyehye/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:40 np0005532761 podman[88187]: 2025-11-23 20:41:40.972902758 +0000 UTC m=+0.033889011 container create d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:41 np0005532761 systemd[1]: Started libpod-conmon-d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48.scope.
Nov 23 15:41:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2636400cdb455dfe4db15807581a0fb681b21c14a03ba1b66ae59ca43ede982e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2636400cdb455dfe4db15807581a0fb681b21c14a03ba1b66ae59ca43ede982e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2636400cdb455dfe4db15807581a0fb681b21c14a03ba1b66ae59ca43ede982e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:41.030082827 +0000 UTC m=+0.091069080 container init d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:41.040900045 +0000 UTC m=+0.101886298 container start d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:41.045034065 +0000 UTC m=+0.106020338 container attach d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:40.958189598 +0000 UTC m=+0.019175851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 23 15:41:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v96: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:41 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:41 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/530987644; not ready for session (expect reconnect)
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:41 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.oyehye/server_addr}] v 0)
Nov 23 15:41:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1122996363' entity='client.admin' 
Nov 23 15:41:41 np0005532761 systemd[1]: libpod-d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48.scope: Deactivated successfully.
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:41.410659876 +0000 UTC m=+0.471646129 container died d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2636400cdb455dfe4db15807581a0fb681b21c14a03ba1b66ae59ca43ede982e-merged.mount: Deactivated successfully.
Nov 23 15:41:41 np0005532761 podman[88187]: 2025-11-23 20:41:41.462098431 +0000 UTC m=+0.523084714 container remove d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48 (image=quay.io/ceph/ceph:v19, name=relaxed_bose, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:41 np0005532761 systemd[1]: libpod-conmon-d609cd16c037fb6ad24ea302bd1a934b0c5aff34ef9b00a822f4a44fc22c1d48.scope: Deactivated successfully.
Nov 23 15:41:41 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Nov 23 15:41:41 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Nov 23 15:41:42 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/530987644; not ready for session (expect reconnect)
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:42 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
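[annotation] The repeated "failed to return metadata for osd.2: (2) No such file or directory" entries are a transient boot race rather than a fault: osd.2 has only just been added to the map, its metadata has not yet been committed to the mon, and the mgr is simultaneously refusing the OSD's session ("not ready for session (expect reconnect)"). Once the OSD finishes registering, the same query succeeds (sketch):

    # after osd.2 registers, this returns its host, devices, and version
    ceph osd metadata 2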
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1122996363' entity='client.admin' 
Nov 23 15:41:42 np0005532761 python3[88264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.kgyerp/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.455926159 +0000 UTC m=+0.045676624 container create 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:41:42 np0005532761 systemd[1]: Started libpod-conmon-7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071.scope.
Nov 23 15:41:42 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a413ae36f426c2e0f8f656f828e3c36602692b3fa866b000b4e3c1126ce89f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a413ae36f426c2e0f8f656f828e3c36602692b3fa866b000b4e3c1126ce89f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a413ae36f426c2e0f8f656f828e3c36602692b3fa866b000b4e3c1126ce89f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.434146541 +0000 UTC m=+0.023897116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.531502166 +0000 UTC m=+0.121252641 container init 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.537461214 +0000 UTC m=+0.127211679 container start 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.54142981 +0000 UTC m=+0.131180285 container attach 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.kgyerp/server_addr}] v 0)
Nov 23 15:41:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1003409241' entity='client.admin' 
Nov 23 15:41:42 np0005532761 systemd[1]: libpod-7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071.scope: Deactivated successfully.
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.91717257 +0000 UTC m=+0.506923045 container died 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045608521s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984413147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664437294s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.603248596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045608521s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984413147s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664437294s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603248596s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068432808s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.007392883s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068432808s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.007392883s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664050102s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.603042603s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664050102s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603042603s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045347214s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984436035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045347214s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984436035s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.665056229s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.604354858s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045228004s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984527588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045228004s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.665056229s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604354858s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045114517s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984443665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045114517s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984443665s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664931297s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.604347229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664931297s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604347229s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045026779s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984542847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.045026779s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.044945717s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984542847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.044945717s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664735794s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.604537964s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=22/23 n=0 ec=13/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068808556s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.008628845s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664735794s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604537964s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=22/23 n=0 ec=13/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068808556s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008628845s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=24/26 n=0 ec=16/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664821625s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 81.604728699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=24/26 n=0 ec=16/16 lis/c=24/24 les/c/f=26/26/0 sis=32 pruub=14.664821625s) [] r=-1 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604728699s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.069145203s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.009162903s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068885803s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.008903503s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.069145203s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009162903s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068885803s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008903503s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068996429s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.009094238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068996429s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068918228s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.009094238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068918228s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.073391914s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.013618469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.044548035s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 78.984756470s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.073391914s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013618469s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=12.044548035s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984756470s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068987846s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.009284973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068987846s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009284973s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068887711s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.009262085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.073388100s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 79.013771057s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.068887711s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009262085s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.073388100s) [] r=-1 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013771057s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:42 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2a413ae36f426c2e0f8f656f828e3c36602692b3fa866b000b4e3c1126ce89f1-merged.mount: Deactivated successfully.
Nov 23 15:41:42 np0005532761 podman[88265]: 2025-11-23 20:41:42.980580444 +0000 UTC m=+0.570330909 container remove 7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071 (image=quay.io/ceph/ceph:v19, name=jolly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:42 np0005532761 systemd[1]: libpod-conmon-7a59a4be2093e8c931aa3ff85434bf6b18a0d2355a49c3e9e7f2a3e9294c7071.scope: Deactivated successfully.
Nov 23 15:41:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:43 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/530987644; not ready for session (expect reconnect)
Nov 23 15:41:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:43 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:43 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1003409241' entity='client.admin' 
Nov 23 15:41:43 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 23 15:41:43 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 23 15:41:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:43 np0005532761 python3[88343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.jtkauz/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:43 np0005532761 podman[88346]: 2025-11-23 20:41:43.972688046 +0000 UTC m=+0.044190315 container create a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:44 np0005532761 systemd[1]: Started libpod-conmon-a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73.scope.
Nov 23 15:41:44 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f350ac9b0735e7b30a0d7385112ba577c88304cc554925999610c4d5fd9c6837/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f350ac9b0735e7b30a0d7385112ba577c88304cc554925999610c4d5fd9c6837/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f350ac9b0735e7b30a0d7385112ba577c88304cc554925999610c4d5fd9c6837/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:43.950301642 +0000 UTC m=+0.021803941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:44.119631198 +0000 UTC m=+0.191133497 container init a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:44.125672459 +0000 UTC m=+0.197174738 container start a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:44.157242177 +0000 UTC m=+0.228744486 container attach a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/530987644; not ready for session (expect reconnect)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.jtkauz/server_addr}] v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1887137413' entity='client.admin' 
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:44 np0005532761 systemd[1]: libpod-a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73.scope: Deactivated successfully.
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:44.560980232 +0000 UTC m=+0.632482511 container died a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:41:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:44 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f350ac9b0735e7b30a0d7385112ba577c88304cc554925999610c4d5fd9c6837-merged.mount: Deactivated successfully.
Nov 23 15:41:44 np0005532761 podman[88346]: 2025-11-23 20:41:44.601638042 +0000 UTC m=+0.673140321 container remove a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73 (image=quay.io/ceph/ceph:v19, name=magical_williamson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 15:41:44 np0005532761 systemd[1]: libpod-conmon-a792854b30bf1b862c6ad09b0befce9a44e5e3543048e6d01248b10d78f2ab73.scope: Deactivated successfully.
Nov 23 15:41:44 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 23 15:41:44 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 23 15:41:44 np0005532761 python3[88500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:44 np0005532761 podman[88574]: 2025-11-23 20:41:44.952289885 +0000 UTC m=+0.043435015 container create a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 15:41:44 np0005532761 systemd[1]: Started libpod-conmon-a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7.scope.
Nov 23 15:41:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07dd1bfefe9f198c9365c7525cd83020800531dd1b4e6ecc20f1b08889962fa7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07dd1bfefe9f198c9365c7525cd83020800531dd1b4e6ecc20f1b08889962fa7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07dd1bfefe9f198c9365c7525cd83020800531dd1b4e6ecc20f1b08889962fa7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:45.024795701 +0000 UTC m=+0.115940841 container init a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:44.934638847 +0000 UTC m=+0.025783997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:45.031295804 +0000 UTC m=+0.122440934 container start a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:45.035557307 +0000 UTC m=+0.126702437 container attach a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/530987644; not ready for session (expect reconnect)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1887137413' entity='client.admin' 
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644] boot
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:41:45 np0005532761 busy_dirac[88615]: module 'dashboard' is already disabled
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:45 np0005532761 systemd[1]: libpod-a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7.scope: Deactivated successfully.
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:45.617329599 +0000 UTC m=+0.708474729 container died a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-07dd1bfefe9f198c9365c7525cd83020800531dd1b4e6ecc20f1b08889962fa7-merged.mount: Deactivated successfully.
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:41:45 np0005532761 podman[88574]: 2025-11-23 20:41:45.666884296 +0000 UTC m=+0.758029416 container remove a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7 (image=quay.io/ceph/ceph:v19, name=busy_dirac, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:41:45 np0005532761 systemd[1]: libpod-conmon-a3fe4dd0496e980dd867b2463d7f29f86b3c32ae70fc0636cc5e93f5c6b21ab7.scope: Deactivated successfully.
Nov 23 15:41:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Nov 23 15:41:45 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Nov 23 15:41:45 np0005532761 python3[88949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:46 np0005532761 podman[88950]: 2025-11-23 20:41:46.037858348 +0000 UTC m=+0.046533066 container create df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:46 np0005532761 systemd[1]: Started libpod-conmon-df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2.scope.
Nov 23 15:41:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d787b523690bfffeba16f3980225fff37e254833ba53d5784e12d2897c93fa86/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d787b523690bfffeba16f3980225fff37e254833ba53d5784e12d2897c93fa86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d787b523690bfffeba16f3980225fff37e254833ba53d5784e12d2897c93fa86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:46 np0005532761 podman[88950]: 2025-11-23 20:41:46.106872852 +0000 UTC m=+0.115547580 container init df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:46 np0005532761 podman[88950]: 2025-11-23 20:41:46.112880221 +0000 UTC m=+0.121554929 container start df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 23 15:41:46 np0005532761 podman[88950]: 2025-11-23 20:41:46.021147975 +0000 UTC m=+0.029822723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:46 np0005532761 podman[88950]: 2025-11-23 20:41:46.116251361 +0000 UTC m=+0.124926089 container attach df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: OSD bench result of 9936.100737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644] boot
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 23 15:41:46 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.710244508 +0000 UTC m=+0.037191219 container create a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:46 np0005532761 systemd[1]: Started libpod-conmon-a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51.scope.
Nov 23 15:41:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.788303871 +0000 UTC m=+0.115250602 container init a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.695010584 +0000 UTC m=+0.021957305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.793342735 +0000 UTC m=+0.120289446 container start a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:41:46 np0005532761 brave_blackwell[89094]: 167 167
Nov 23 15:41:46 np0005532761 systemd[1]: libpod-a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51.scope: Deactivated successfully.
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.798099391 +0000 UTC m=+0.125046132 container attach a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.798535373 +0000 UTC m=+0.125482084 container died a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:41:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-025c7e0da04a3bf2a85024b11d6b23a7f8a40440b4e934e55ada5e41447a607b-merged.mount: Deactivated successfully.
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.148414612s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984413147s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.148382187s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984413147s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.12( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.766915321s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603042603s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.13( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.766942024s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603248596s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765021324s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603042603s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765212059s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.603248596s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169219971s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.007392883s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169190407s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.007392883s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.146195412s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984443665s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.13( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.146164894s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984443665s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.146062851s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984436035s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.8( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765922546s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604354858s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.146073341s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984527588s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765907288s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604354858s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.c( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.146027565s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.10( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145865440s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984436035s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145811081s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145788193s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765565872s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604347229s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765546799s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604347229s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145641327s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765602112s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604537964s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145616531s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=24/26 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765583992s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604537964s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=22/23 n=0 ec=13/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169599533s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008628845s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=22/23 n=0 ec=13/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169563293s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008628845s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=24/26 n=0 ec=16/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765573502s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604728699s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=24/26 n=0 ec=16/16 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=10.765541077s) [2] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.604728699s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169913292s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009162903s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169892311s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009162903s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169622421s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008903503s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169596672s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.008903503s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169653893s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169633865s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169599533s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145232201s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984756470s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169577599s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009094238s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=29/30 n=0 ec=20/12 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=8.145202637s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.984756470s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169665337s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009284973s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169619560s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009262085s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169633865s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009284973s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.174061775s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013771057s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.169603348s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.009262085s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.174044609s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013771057s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 33 pg[4.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.173863411s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013618469s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:46 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.173838615s) [2] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.013618469s@ mbc={}] state<Start>: transitioning to Stray
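[editor's note] The burst of osd.1 messages above is one event repeated per placement group: at epoch 33 each PG opens a new peering interval (up/acting goes from [] to [2], so osd.1 is no longer in the acting set, hence r=-1), and at epoch 34 the PG parks in Stray. A throwaway tally script (not Ceph code; assumes journalctl-style lines like those above on stdin) makes the pattern easy to confirm:

#!/usr/bin/env python3
# Tally osd peering messages: which PGs opened a new peering interval,
# and which of those ended up Stray. Usage (hypothetical):
#   journalctl -u ceph-osd@1 | ./tally_peering.py
import re
import sys

PG_RE = re.compile(r"pg\[(\d+\.[0-9a-f]+)\(")  # matches e.g. pg[2.15( or pg[4.1d(

peered, stray = set(), set()
for line in sys.stdin:
    m = PG_RE.search(line)
    if not m:
        continue
    pgid = m.group(1)
    if "start_peering_interval" in line:
        peered.add(pgid)
    elif "transitioning to Stray" in line:
        stray.add(pgid)

print(f"{len(peered)} PGs re-peered, {len(stray)} went Stray")
print("peered but not (yet) Stray:", sorted(peered - stray))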
Nov 23 15:41:46 np0005532761 podman[89078]: 2025-11-23 20:41:46.840081186 +0000 UTC m=+0.167027897 container remove a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:46 np0005532761 systemd[1]: libpod-conmon-a27bdd79a671e43482b299a3df57d90f7e7903c3204935e3f9c78aa48b0f5b51.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.007645937 +0000 UTC m=+0.039204362 container create e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:41:47 np0005532761 systemd[1]: Started libpod-conmon-e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca.scope.
Nov 23 15:41:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.069527211 +0000 UTC m=+0.101085636 container init e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.079443884 +0000 UTC m=+0.111002309 container start e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.082725212 +0000 UTC m=+0.114283637 container attach e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:46.989324251 +0000 UTC m=+0.020882706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:41:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  1: '-n'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  2: 'mgr.compute-0.oyehye'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  3: '-f'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  4: '--setuser'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  5: 'ceph'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  6: '--setgroup'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  7: 'ceph'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr respawn  exe_path /proc/self/exe
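[editor's note] The numbered lines above are ceph-mgr dumping its saved argv before re-executing itself through /proc/self/exe, which resolves to the running binary even if the file on disk has since been replaced. A minimal sketch of the same self-respawn pattern, in Python rather than the mgr's C++, assuming a Linux /proc:

#!/usr/bin/env python3
# Illustration only, not Ceph's implementation: print the saved command
# line, then re-exec the current binary via /proc/self/exe.
import os
import sys

def respawn():
    # For a Python script /proc/self/exe is the interpreter, so argv must
    # start with the interpreter followed by the original command line.
    new_argv = [sys.executable] + sys.argv
    for i, arg in enumerate(new_argv):
        print(f"respawn  {i}: {arg!r}", file=sys.stderr)
    print("respawn respawning with exe /proc/self/exe", file=sys.stderr)
    os.execv("/proc/self/exe", new_argv)  # does not return on success

if __name__ == "__main__":
    if os.environ.get("RESPAWNED") != "1":
        os.environ["RESPAWNED"] = "1"  # guard so the demo respawns exactly once
        respawn()
    print("running again after respawn, pid", os.getpid())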
Nov 23 15:41:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:47 np0005532761 systemd[1]: libpod-df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 podman[88950]: 2025-11-23 20:41:47.275207554 +0000 UTC m=+1.283882272 container died df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 15:41:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d787b523690bfffeba16f3980225fff37e254833ba53d5784e12d2897c93fa86-merged.mount: Deactivated successfully.
Nov 23 15:41:47 np0005532761 podman[88950]: 2025-11-23 20:41:47.325224453 +0000 UTC m=+1.333899171 container remove df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2 (image=quay.io/ceph/ceph:v19, name=angry_nobel, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:41:47 np0005532761 systemd[1]: session-25.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-23.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 25 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 23 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 33 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 25.
Nov 23 15:41:47 np0005532761 systemd[1]: session-32.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 32 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd[1]: session-30.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 30 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd[1]: libpod-conmon-df1c6b0299419c28d2be9bf7613f19dc0c2bdedade715175119ef8ed551725e2.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-27.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 23.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 27 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd[1]: session-26.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 26 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd[1]: session-21.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-24.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 21 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 24 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd[1]: session-28.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-29.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-31.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 28 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 29 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Session 31 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 32.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 30.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 27.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 26.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 21.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 24.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 28.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 29.
Nov 23 15:41:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setuser ceph since I am not root
Nov 23 15:41:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setgroup ceph since I am not root
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 31.
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:41:47 np0005532761 stupefied_wescoff[89134]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:41:47 np0005532761 stupefied_wescoff[89134]: --> All data devices are unavailable
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:41:47 np0005532761 systemd[1]: libpod-e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.417713529 +0000 UTC m=+0.449271954 container died e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c0888330712e0192fa80186ef5ebdb3f6d8d46f491f75b5bec8b9231553e37ba-merged.mount: Deactivated successfully.
Nov 23 15:41:47 np0005532761 podman[89117]: 2025-11-23 20:41:47.486727642 +0000 UTC m=+0.518286067 container remove e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 23 15:41:47 np0005532761 systemd[1]: libpod-conmon-e24ca96ffbdca4021a72a4ee3a272c36d49877ad31bb0a19952c6718116f58ca.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:41:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:47.520+0000 7f9214656140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532761 systemd[1]: session-33.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532761 systemd[1]: session-33.scope: Consumed 22.294s CPU time.
Nov 23 15:41:47 np0005532761 systemd-logind[820]: Removed session 33.
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:41:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:47.605+0000 7f9214656140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:41:47 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 23 15:41:47 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 23 15:41:47 np0005532761 python3[89221]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
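[editor's note] The flattened _raw_params above is easier to read with continuations restored; this is the same podman invocation as logged, nothing added or changed:

podman run --rm --net=host --ipc=host \
    --interactive \
    --volume /etc/ceph:/etc/ceph:z \
    --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
    --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
    --entrypoint ceph \
    quay.io/ceph/ceph:v19 \
    --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
    -c /etc/ceph/ceph.conf \
    -k /etc/ceph/ceph.client.admin.keyring \
    dashboard set-grafana-api-username admin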
Nov 23 15:41:47 np0005532761 podman[89222]: 2025-11-23 20:41:47.773849338 +0000 UTC m=+0.040417774 container create e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:41:47 np0005532761 systemd[1]: Started libpod-conmon-e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114.scope.
Nov 23 15:41:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa19fcbef12c78438d5e205ed22c78a9c0565b56f82a60ae8dc96b72dea41c5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa19fcbef12c78438d5e205ed22c78a9c0565b56f82a60ae8dc96b72dea41c5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa19fcbef12c78438d5e205ed22c78a9c0565b56f82a60ae8dc96b72dea41c5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:47 np0005532761 podman[89222]: 2025-11-23 20:41:47.754740441 +0000 UTC m=+0.021308887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:47 np0005532761 podman[89222]: 2025-11-23 20:41:47.851649545 +0000 UTC m=+0.118218021 container init e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:41:47 np0005532761 podman[89222]: 2025-11-23 20:41:47.859112963 +0000 UTC m=+0.125681399 container start e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:41:47 np0005532761 podman[89222]: 2025-11-23 20:41:47.862828602 +0000 UTC m=+0.129397048 container attach e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 15:41:48 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:41:48 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:48 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:41:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:48.382+0000 7f9214656140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:48 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 23 15:41:48 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 23 15:41:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:48 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:49.005+0000 7f9214656140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:49.176+0000 7f9214656140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:49.245+0000 7f9214656140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:49.385+0000 7f9214656140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:41:49 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Nov 23 15:41:49 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:41:49 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.426+0000 7f9214656140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.639+0000 7f9214656140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 23 15:41:50 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.713+0000 7f9214656140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.780+0000 7f9214656140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.868+0000 7f9214656140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:41:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:50.939+0000 7f9214656140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:41:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:51.288+0000 7f9214656140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:41:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:51.384+0000 7f9214656140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:41:51 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 23 15:41:51 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:41:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:51.812+0000 7f9214656140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.409+0000 7f9214656140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.479+0000 7f9214656140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.561+0000 7f9214656140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:41:52 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 23 15:41:52 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.711+0000 7f9214656140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.789+0000 7f9214656140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:41:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:52.938+0000 7f9214656140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:53.159+0000 7f9214656140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz restarted
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz started
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:41:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:53.420+0000 7f9214656140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:41:53.495+0000 7f9214656140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
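[editor's note] Each "Module X has missing NOTIFY_TYPES member" pair above is a warning, not a failure: every module still loads, and the mgr is only noting that the module does not declare which cluster notifications it consumes. A hedged sketch of the declaration the check looks for, assuming the squid-era mgr_module API (the import below only resolves inside the ceph-mgr runtime):

from mgr_module import MgrModule, NotifyType

class Module(MgrModule):
    # Declaring the notification types this module consumes is what the
    # "missing NOTIFY_TYPES member" check above is probing for.
    NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

    def notify(self, notify_type, notify_id):
        # Called by the mgr for each subscribed notification.
        self.log.info("notification: %s", notify_type)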
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Active manager daemon compute-0.oyehye restarted
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x55595b5d7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:41:53 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 23 15:41:53 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map Activating!
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp restarted
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp started
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.oyehye(active, starting, since 0.252391s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map I am now activating
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e1 all = 1
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: balancer
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [balancer INFO root] Starting
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Manager daemon compute-0.oyehye is now available
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:41:53
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: cephadm
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: crash
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: dashboard
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [dashboard INFO sso] Loading SSO DB version=1
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: devicehealth
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Starting
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: iostat
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: nfs
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: orchestrator
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: pg_autoscaler
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: progress
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [progress INFO root] Loading...
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f91929bc1f0>, <progress.module.GhostEvent object at 0x7f91929bc1c0>, <progress.module.GhostEvent object at 0x7f91929bc220>, <progress.module.GhostEvent object at 0x7f91929bc2b0>, <progress.module.GhostEvent object at 0x7f91929bc2e0>, <progress.module.GhostEvent object at 0x7f91929bc310>, <progress.module.GhostEvent object at 0x7f91929bc340>, <progress.module.GhostEvent object at 0x7f91929bc370>, <progress.module.GhostEvent object at 0x7f91929bc3a0>, <progress.module.GhostEvent object at 0x7f91929bc3d0>] historic events
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] recovery thread starting
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] starting setup
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: rbd_support
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: restful
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: status
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [restful WARNING root] server not running: no certificate configured
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: telemetry
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] PerfHandler: starting
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TaskHandler: starting
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"} v 0)
Nov 23 15:41:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] setup complete
Nov 23 15:41:53 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: volumes
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 23 15:41:54 np0005532761 systemd-logind[820]: New session 34 of user ceph-admin.
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 23 15:41:54 np0005532761 systemd[1]: Started Session 34 of User ceph-admin.
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.module] Engine started.
Nov 23 15:41:54 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 23 15:41:54 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.oyehye(active, since 1.2741s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: Manager daemon compute-0.oyehye is now available
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:41:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:41:54 np0005532761 podman[89527]: 2025-11-23 20:41:54.799865146 +0000 UTC m=+0.061111014 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:54 np0005532761 hungry_williams[89237]: Option GRAFANA_API_USERNAME updated
Nov 23 15:41:54 np0005532761 systemd[1]: libpod-e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114.scope: Deactivated successfully.
Nov 23 15:41:54 np0005532761 podman[89222]: 2025-11-23 20:41:54.827523231 +0000 UTC m=+7.094091657 container died e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:41:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-aa19fcbef12c78438d5e205ed22c78a9c0565b56f82a60ae8dc96b72dea41c5b-merged.mount: Deactivated successfully.
Nov 23 15:41:54 np0005532761 podman[89222]: 2025-11-23 20:41:54.863289881 +0000 UTC m=+7.129858307 container remove e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114 (image=quay.io/ceph/ceph:v19, name=hungry_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:54 np0005532761 systemd[1]: libpod-conmon-e3d03704734866859c6f7a2d731b3ed53c2f89add21b09fe5ac30b3b20de6114.scope: Deactivated successfully.
Nov 23 15:41:54 np0005532761 podman[89527]: 2025-11-23 20:41:54.895217079 +0000 UTC m=+0.156462947 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:41:55 np0005532761 python3[89618]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.245317818 +0000 UTC m=+0.046313702 container create 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:55 np0005532761 systemd[1]: Started libpod-conmon-5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e.scope.
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c83beca63da213178a251883ce170b3523bc5eee6b6de7e4b2a17b10b797cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c83beca63da213178a251883ce170b3523bc5eee6b6de7e4b2a17b10b797cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c83beca63da213178a251883ce170b3523bc5eee6b6de7e4b2a17b10b797cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.226328623 +0000 UTC m=+0.027324517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.329673078 +0000 UTC m=+0.130668982 container init 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.343761162 +0000 UTC m=+0.144757067 container start 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.348478187 +0000 UTC m=+0.149474091 container attach 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:55 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 23 15:41:55 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 recursing_hamilton[89696]: Option GRAFANA_API_PASSWORD updated
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:41:55] ENGINE Bus STARTING
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:41:55] ENGINE Bus STARTING
Nov 23 15:41:55 np0005532761 systemd[1]: libpod-5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e.scope: Deactivated successfully.
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.719877182 +0000 UTC m=+0.520873086 container died 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:41:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-71c83beca63da213178a251883ce170b3523bc5eee6b6de7e4b2a17b10b797cb-merged.mount: Deactivated successfully.
Nov 23 15:41:55 np0005532761 podman[89649]: 2025-11-23 20:41:55.76271819 +0000 UTC m=+0.563714074 container remove 5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e (image=quay.io/ceph/ceph:v19, name=recursing_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v4: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:41:55 np0005532761 systemd[1]: libpod-conmon-5c97f62482d6f526e82b2573dbc7855822a3a2a667d964aa851a770e5d1a914e.scope: Deactivated successfully.
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:41:55] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:41:55] ENGINE Client ('192.168.122.100', 34418) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:41:55] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:41:55] ENGINE Client ('192.168.122.100', 34418) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:41:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:41:55] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:41:55] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:41:55] ENGINE Bus STARTED
Nov 23 15:41:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:41:55] ENGINE Bus STARTED
Nov 23 15:41:56 np0005532761 python3[89889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.194650953 +0000 UTC m=+0.059373678 container create cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:41:56 np0005532761 systemd[1]: Started libpod-conmon-cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c.scope.
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.167412029 +0000 UTC m=+0.032134794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fd5c4385824699a74c4da266c96c3e1a377bf7ea88390eddeb2876f8df0550/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fd5c4385824699a74c4da266c96c3e1a377bf7ea88390eddeb2876f8df0550/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fd5c4385824699a74c4da266c96c3e1a377bf7ea88390eddeb2876f8df0550/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.292986145 +0000 UTC m=+0.157708910 container init cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.300939606 +0000 UTC m=+0.165662321 container start cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.305675352 +0000 UTC m=+0.170398097 container attach cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:56 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 23 15:41:56 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 23 15:41:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14376 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 silly_curran[89920]: Option ALERTMANAGER_API_HOST updated
Nov 23 15:41:56 np0005532761 systemd[1]: libpod-cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c.scope: Deactivated successfully.
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.715214379 +0000 UTC m=+0.579937104 container died cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Nov 23 15:41:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c5fd5c4385824699a74c4da266c96c3e1a377bf7ea88390eddeb2876f8df0550-merged.mount: Deactivated successfully.
Nov 23 15:41:56 np0005532761 podman[89890]: 2025-11-23 20:41:56.758693704 +0000 UTC m=+0.623416429 container remove cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c (image=quay.io/ceph/ceph:v19, name=silly_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:41:56 np0005532761 systemd[1]: libpod-conmon-cd2d2798bc6839a01948f7ea82346a6722be10ab901e8d45d938168188fecf7c.scope: Deactivated successfully.
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:41:55] ENGINE Bus STARTING
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:41:55] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:41:55] ENGINE Client ('192.168.122.100', 34418) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:41:55] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:41:55] ENGINE Bus STARTED
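
[annotation] The ENGINE lines are CherryPy output from mgr web endpoints being (re)started, forwarded to the journal via the mon: one TLS listener on https://192.168.122.100:7150 and one plain-HTTP listener on http://192.168.122.100:8765. The "Client ... lost" EOF during handshake is the trace left when a peer opens the TLS port and drops the connection without completing a handshake, which is what a plain TCP liveness probe or port scan does. An illustrative reproduction, assuming nmap-ncat's nc is installed on the host:

    # a zero-I/O connect-and-close against the TLS port triggers the same EOF trace
    nc -z 192.168.122.100 7150
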
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 python3[89984]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
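
[annotation] In the journald record above, #012 is the syslog escape for a newline. Decoded and reflowed, the Ansible task is this one-off container run (copied from the log; arguments after the image name go to the ceph entrypoint):

    podman run --rm --net=host --ipc=host --interactive \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      dashboard set-prometheus-api-host http://192.168.122.100:9092
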
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.194391037 +0000 UTC m=+0.058057894 container create 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 15:41:57 np0005532761 systemd[1]: Started libpod-conmon-711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b.scope.
Nov 23 15:41:57 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.165663473 +0000 UTC m=+0.029330410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd065755303b9d5f3ae44bfd296d0665fdc801b89916e3550d83a282047cb3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd065755303b9d5f3ae44bfd296d0665fdc801b89916e3550d83a282047cb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd065755303b9d5f3ae44bfd296d0665fdc801b89916e3550d83a282047cb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.27997641 +0000 UTC m=+0.143643297 container init 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.291209768 +0000 UTC m=+0.154876655 container start 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.297601638 +0000 UTC m=+0.161268515 container attach 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
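
[annotation] This burst is cephadm refreshing client configuration on every managed host: it asks the mon for a minimal ceph.conf and for the admin credentials, then pushes both files out (the "Updating ..." lines that follow). The two mon commands it dispatched can be run by hand to see exactly what gets distributed:

    ceph config generate-minimal-conf    # the ceph.conf cephadm writes to each host
    ceph auth get client.admin           # the keyring material it distributes
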
Nov 23 15:41:57 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 23 15:41:57 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.24161 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 nostalgic_margulis[90001]: Option PROMETHEUS_API_HOST updated
Nov 23 15:41:57 np0005532761 systemd[1]: libpod-711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b.scope: Deactivated successfully.
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.703698974 +0000 UTC m=+0.567365841 container died 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:41:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-69fd065755303b9d5f3ae44bfd296d0665fdc801b89916e3550d83a282047cb3-merged.mount: Deactivated successfully.
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:41:57 np0005532761 podman[89985]: 2025-11-23 20:41:57.769272046 +0000 UTC m=+0.632938903 container remove 711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b (image=quay.io/ceph/ceph:v19, name=nostalgic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:41:57 np0005532761 systemd[1]: libpod-conmon-711b6d6bdff4b7c73afd4ce0f5214d64db7cbfa496e68e65b98e7e9a5e03d26b.scope: Deactivated successfully.
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 python3[90287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.14142209 +0000 UTC m=+0.055384762 container create 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:58 np0005532761 systemd[1]: Started libpod-conmon-22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572.scope.
Nov 23 15:41:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.11092388 +0000 UTC m=+0.024886572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa252eeb6ba4cd273583103e058188447d6dee7e9e29ba5fd4e8f30b83eedfd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa252eeb6ba4cd273583103e058188447d6dee7e9e29ba5fd4e8f30b83eedfd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa252eeb6ba4cd273583103e058188447d6dee7e9e29ba5fd4e8f30b83eedfd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.293042927 +0000 UTC m=+0.207005619 container init 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.299970511 +0000 UTC m=+0.213933183 container start 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.310660236 +0000 UTC m=+0.224622948 container attach 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 23 15:41:58 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14388 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:58 np0005532761 beautiful_albattani[90423]: Option GRAFANA_API_URL updated
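
[annotation] Each of the three short-lived containers in this section (silly_curran, nostalgic_margulis, beautiful_albattani) exists only to issue a single dashboard setting. With an admin keyring already on the host, the same wiring of the monitoring stack reduces to three commands (URLs taken verbatim from the log):

    ceph dashboard set-alertmanager-api-host http://192.168.122.100:9093
    ceph dashboard set-prometheus-api-host   http://192.168.122.100:9092
    ceph dashboard set-grafana-api-url       http://192.168.122.100:3100
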
Nov 23 15:41:58 np0005532761 systemd[1]: libpod-22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572.scope: Deactivated successfully.
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.722378961 +0000 UTC m=+0.636341633 container died 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:41:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0aa252eeb6ba4cd273583103e058188447d6dee7e9e29ba5fd4e8f30b83eedfd-merged.mount: Deactivated successfully.
Nov 23 15:41:58 np0005532761 podman[90362]: 2025-11-23 20:41:58.809118135 +0000 UTC m=+0.723080807 container remove 22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572 (image=quay.io/ceph/ceph:v19, name=beautiful_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:58 np0005532761 systemd[1]: libpod-conmon-22c2793c1473ac3295dcd3c945cfe801938e5574b43b835f5e78f5d9b1a95572.scope: Deactivated successfully.
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:58 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:59 np0005532761 python3[90822]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:59 np0005532761 podman[90889]: 2025-11-23 20:41:59.126874885 +0000 UTC m=+0.037058366 container create 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:59 np0005532761 systemd[1]: Started libpod-conmon-61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666.scope.
Nov 23 15:41:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:41:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988e2281a8b122d4523d08b2d8b5e79ccce2227af241ce1f998133af36401622/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988e2281a8b122d4523d08b2d8b5e79ccce2227af241ce1f998133af36401622/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988e2281a8b122d4523d08b2d8b5e79ccce2227af241ce1f998133af36401622/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:59 np0005532761 podman[90889]: 2025-11-23 20:41:59.112120193 +0000 UTC m=+0.022303694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:41:59 np0005532761 podman[90889]: 2025-11-23 20:41:59.220491632 +0000 UTC m=+0.130675133 container init 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 15:41:59 np0005532761 podman[90889]: 2025-11-23 20:41:59.225958427 +0000 UTC m=+0.136141908 container start 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:41:59 np0005532761 podman[90889]: 2025-11-23 20:41:59.229188153 +0000 UTC m=+0.139371634 container attach 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:41:59 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:59 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:59 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 23 15:41:59 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  1: '-n'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  2: 'mgr.compute-0.oyehye'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  3: '-f'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  4: '--setuser'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  5: 'ceph'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  6: '--setgroup'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  7: 'ceph'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr respawn  exe_path /proc/self/exe
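
[annotation] Disabling a module changes the set of enabled modules in the mgrmap, and the active mgr reacts by re-executing itself via /proc/self/exe with the argv dumped above, rather than failing over to a standby; that is why compute-0.oyehye stays active across the restart (mgrmap e18 below). The disable/enable cycle the Ansible tasks drive is equivalent to:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard
    ceph mgr module ls     # verify which modules are enabled afterwards
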
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.oyehye(active, since 6s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:00 np0005532761 systemd[1]: libpod-61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666.scope: Deactivated successfully.
Nov 23 15:42:00 np0005532761 podman[91080]: 2025-11-23 20:42:00.130017019 +0000 UTC m=+0.024988025 container died 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 15:42:00 np0005532761 systemd[1]: session-34.scope: Deactivated successfully.
Nov 23 15:42:00 np0005532761 systemd[1]: session-34.scope: Consumed 4.154s CPU time.
Nov 23 15:42:00 np0005532761 systemd-logind[820]: Session 34 logged out. Waiting for processes to exit.
Nov 23 15:42:00 np0005532761 systemd-logind[820]: Removed session 34.
Nov 23 15:42:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setuser ceph since I am not root
Nov 23 15:42:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setgroup ceph since I am not root
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532761 ceph-mon[74569]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:42:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-988e2281a8b122d4523d08b2d8b5e79ccce2227af241ce1f998133af36401622-merged.mount: Deactivated successfully.
Nov 23 15:42:00 np0005532761 podman[91080]: 2025-11-23 20:42:00.305664965 +0000 UTC m=+0.200635931 container remove 61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666 (image=quay.io/ceph/ceph:v19, name=sleepy_brahmagupta, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:00 np0005532761 systemd[1]: libpod-conmon-61c91e7c30c751b0e858eded74a7da727a532356f945a17dde8af8c0e33e8666.scope: Deactivated successfully.
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:42:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:00.317+0000 7f3fa6f1e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:42:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:00.401+0000 7f3fa6f1e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
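
[annotation] The repeated "missing NOTIFY_TYPES member" lines during the module scan are registration notices for in-tree mgr modules that do not declare which notification types they consume; in this log every such module still goes on to load. To collect them for this daemon, the journald unit should follow cephadm's ceph-<fsid>@<type>.<id> naming; a sketch, with the unit name inferred from the fsid and daemon name in the log:

    journalctl -u 'ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mgr.compute-0.oyehye.service' \
      | grep 'missing NOTIFY_TYPES'
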
Nov 23 15:42:00 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 23 15:42:00 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 23 15:42:00 np0005532761 python3[91141]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:00 np0005532761 podman[91142]: 2025-11-23 20:42:00.695203601 +0000 UTC m=+0.062850910 container create 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:42:00 np0005532761 podman[91142]: 2025-11-23 20:42:00.656515183 +0000 UTC m=+0.024162492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:00 np0005532761 systemd[1]: Started libpod-conmon-804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1.scope.
Nov 23 15:42:00 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d5e008429685ad80406397fb77056e74146e935f5e7af49f866105163c4bc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d5e008429685ad80406397fb77056e74146e935f5e7af49f866105163c4bc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d5e008429685ad80406397fb77056e74146e935f5e7af49f866105163c4bc2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:00 np0005532761 podman[91142]: 2025-11-23 20:42:00.985387288 +0000 UTC m=+0.353034597 container init 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:42:00 np0005532761 podman[91142]: 2025-11-23 20:42:00.992123387 +0000 UTC m=+0.359770696 container start 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:42:00 np0005532761 podman[91142]: 2025-11-23 20:42:00.996511924 +0000 UTC m=+0.364159223 container attach 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:01.200+0000 7f3fa6f1e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:42:01 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:42:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 23 15:42:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:42:01 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 23 15:42:01 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:01.810+0000 7f3fa6f1e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:42:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:01.971+0000 7f3fa6f1e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:42:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:02.048+0000 7f3fa6f1e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:42:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:02.194+0000 7f3fa6f1e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:42:02 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:42:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:42:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.oyehye(active, since 8s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:02 np0005532761 systemd[1]: libpod-804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1.scope: Deactivated successfully.
Nov 23 15:42:02 np0005532761 podman[91142]: 2025-11-23 20:42:02.277452057 +0000 UTC m=+1.645099366 container died 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:02 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c7d5e008429685ad80406397fb77056e74146e935f5e7af49f866105163c4bc2-merged.mount: Deactivated successfully.
Nov 23 15:42:02 np0005532761 podman[91142]: 2025-11-23 20:42:02.325285188 +0000 UTC m=+1.692932497 container remove 804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1 (image=quay.io/ceph/ceph:v19, name=mystifying_kare, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:02 np0005532761 systemd[1]: libpod-conmon-804bb4a3b13f5a3008c5734f3fa150f2c6068e91f1b5b1fb380ee6124e959fa1.scope: Deactivated successfully.
Nov 23 15:42:02 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Nov 23 15:42:02 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:42:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:42:03 np0005532761 python3[91282]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.222+0000 7f3fa6f1e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:42:03 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:42:03 np0005532761 python3[91353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930522.859813-37514-153288002359312/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.448+0000 7f3fa6f1e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.523+0000 7f3fa6f1e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:42:03 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 23 15:42:03 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.594+0000 7f3fa6f1e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.670+0000 7f3fa6f1e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:42:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:03.745+0000 7f3fa6f1e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:42:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:03 np0005532761 python3[91403]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:03 np0005532761 podman[91404]: 2025-11-23 20:42:03.995963892 +0000 UTC m=+0.046944787 container create f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:42:04 np0005532761 systemd[1]: Started libpod-conmon-f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb.scope.
Nov 23 15:42:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35becd138a86d56f2149eb27cb422ca96e34258588cc43e89d05e4260b9efbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35becd138a86d56f2149eb27cb422ca96e34258588cc43e89d05e4260b9efbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35becd138a86d56f2149eb27cb422ca96e34258588cc43e89d05e4260b9efbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:04 np0005532761 podman[91404]: 2025-11-23 20:42:03.976102375 +0000 UTC m=+0.027083300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:04 np0005532761 podman[91404]: 2025-11-23 20:42:04.07945426 +0000 UTC m=+0.130435155 container init f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:42:04 np0005532761 podman[91404]: 2025-11-23 20:42:04.08548697 +0000 UTC m=+0.136467855 container start f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:42:04 np0005532761 podman[91404]: 2025-11-23 20:42:04.088600064 +0000 UTC m=+0.139580949 container attach f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:42:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:04.093+0000 7f3fa6f1e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:42:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:04.193+0000 7f3fa6f1e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:42:04 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 23 15:42:04 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 23 15:42:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:04.629+0000 7f3fa6f1e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.205+0000 7f3fa6f1e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.280+0000 7f3fa6f1e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.364+0000 7f3fa6f1e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:42:05 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 23 15:42:05 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.519+0000 7f3fa6f1e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.594+0000 7f3fa6f1e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:42:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:05.756+0000 7f3fa6f1e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:42:05 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz restarted
Nov 23 15:42:05 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz started
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:06.022+0000 7f3fa6f1e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:06.315+0000 7f3fa6f1e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.oyehye(active, since 12s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:06.384+0000 7f3fa6f1e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x556e08ee9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr respawn  1: '-n'
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.oyehye(active, starting, since 0.0342255s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp restarted
Nov 23 15:42:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp started
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setuser ceph since I am not root
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setgroup ceph since I am not root
Nov 23 15:42:06 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:06 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:06.614+0000 7fc241b65140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:42:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:06.701+0000 7fc241b65140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:42:07 np0005532761 ceph-mon[74569]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:07 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:42:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.oyehye(active, starting, since 1.04996s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:07 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:42:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:07.541+0000 7fc241b65140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:07 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:07 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:08.195+0000 7fc241b65140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:08.388+0000 7fc241b65140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:08.460+0000 7fc241b65140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:42:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:08.602+0000 7fc241b65140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:42:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:08 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:42:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:09.642+0000 7fc241b65140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:42:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:09.854+0000 7fc241b65140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:42:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:09.933+0000 7fc241b65140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:42:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:10.019+0000 7fc241b65140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:42:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:10.099+0000 7fc241b65140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:42:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:10.181+0000 7fc241b65140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:42:10 np0005532761 systemd[1]: Stopping User Manager for UID 42477...
Nov 23 15:42:10 np0005532761 systemd[75910]: Activating special unit Exit the Session...
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped target Main User Target.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped target Basic System.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped target Paths.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped target Sockets.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped target Timers.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 15:42:10 np0005532761 systemd[75910]: Closed D-Bus User Message Bus Socket.
Nov 23 15:42:10 np0005532761 systemd[75910]: Stopped Create User's Volatile Files and Directories.
Nov 23 15:42:10 np0005532761 systemd[75910]: Removed slice User Application Slice.
Nov 23 15:42:10 np0005532761 systemd[75910]: Reached target Shutdown.
Nov 23 15:42:10 np0005532761 systemd[75910]: Finished Exit the Session.
Nov 23 15:42:10 np0005532761 systemd[75910]: Reached target Exit the Session.
Nov 23 15:42:10 np0005532761 systemd[1]: user@42477.service: Deactivated successfully.
Nov 23 15:42:10 np0005532761 systemd[1]: Stopped User Manager for UID 42477.
Nov 23 15:42:10 np0005532761 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 23 15:42:10 np0005532761 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 23 15:42:10 np0005532761 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 23 15:42:10 np0005532761 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 23 15:42:10 np0005532761 systemd[1]: Removed slice User Slice of UID 42477.
Nov 23 15:42:10 np0005532761 systemd[1]: user-42477.slice: Consumed 27.853s CPU time.
Nov 23 15:42:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:10.539+0000 7fc241b65140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:42:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:10.645+0000 7fc241b65140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:42:10 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:42:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:11.099+0000 7fc241b65140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:42:11 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz restarted
Nov 23 15:42:11 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz started
Nov 23 15:42:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:11.671+0000 7fc241b65140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:42:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:11.744+0000 7fc241b65140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:42:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:11.828+0000 7fc241b65140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:42:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:11.985+0000 7fc241b65140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:42:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:12.060+0000 7fc241b65140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:42:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:12.220+0000 7fc241b65140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:42:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:12.441+0000 7fc241b65140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:42:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:12.729+0000 7fc241b65140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp restarted
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp started
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.oyehye(active, starting, since 6s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:42:12.804+0000 7fc241b65140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x55ed9132d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map Activating!
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map I am now activating
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.oyehye(active, starting, since 0.085284s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e1 all = 1
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: balancer
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Starting
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Manager daemon compute-0.oyehye is now available
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:42:12
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: cephadm
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: crash
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: dashboard
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [dashboard INFO sso] Loading SSO DB version=1
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: devicehealth
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Starting
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: iostat
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: nfs
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: orchestrator
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: pg_autoscaler
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: progress
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [progress INFO root] Loading...
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fc1c6f1ca90>, <progress.module.GhostEvent object at 0x7fc1c6f1c820>, <progress.module.GhostEvent object at 0x7fc1c6f1c7f0>, <progress.module.GhostEvent object at 0x7fc1c6f1caf0>, <progress.module.GhostEvent object at 0x7fc1c6f1cb20>, <progress.module.GhostEvent object at 0x7fc1c6f1cb50>, <progress.module.GhostEvent object at 0x7fc1c6f1cb80>, <progress.module.GhostEvent object at 0x7fc1c6f1cbb0>, <progress.module.GhostEvent object at 0x7fc1c6f1cbe0>, <progress.module.GhostEvent object at 0x7fc1c6f1cc10>] historic events
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] recovery thread starting
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] starting setup
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: rbd_support
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"} v 0)
Nov 23 15:42:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: restful
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: status
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: telemetry
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [restful WARNING root] server not running: no certificate configured
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 15:42:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] PerfHandler: starting
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: volumes
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TaskHandler: starting
Nov 23 15:42:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"} v 0)
Nov 23 15:42:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] setup complete
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
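[editor's note] The block above enumerates the dashboard's REST routes as each CherryPy controller is mounted. A minimal probe of one of them, assuming the dashboard serves on its default SSL port 8443 and that an "admin" account exists (neither is shown in this log):

    # Hypothetical login against the dashboard REST API; host IP is the mgr
    # host from the log, port and credentials are assumptions.
    curl -k -X POST https://192.168.122.100:8443/api/auth \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "admin", "password": "<password>"}'
    # The returned token is then sent as "Authorization: Bearer <token>"
    # to authenticated routes such as /api/health listed above.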
Nov 23 15:42:13 np0005532761 systemd-logind[820]: New session 35 of user ceph-admin.
Nov 23 15:42:13 np0005532761 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 15:42:13 np0005532761 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 15:42:13 np0005532761 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 15:42:13 np0005532761 systemd[1]: Starting User Manager for UID 42477...
Nov 23 15:42:13 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.module] Engine started.
Nov 23 15:42:13 np0005532761 systemd[91608]: Queued start job for default target Main User Target.
Nov 23 15:42:13 np0005532761 systemd[91608]: Created slice User Application Slice.
Nov 23 15:42:13 np0005532761 systemd[91608]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:42:13 np0005532761 systemd[91608]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:42:13 np0005532761 systemd[91608]: Reached target Paths.
Nov 23 15:42:13 np0005532761 systemd[91608]: Reached target Timers.
Nov 23 15:42:13 np0005532761 systemd[91608]: Starting D-Bus User Message Bus Socket...
Nov 23 15:42:13 np0005532761 systemd[91608]: Starting Create User's Volatile Files and Directories...
Nov 23 15:42:13 np0005532761 systemd[91608]: Finished Create User's Volatile Files and Directories.
Nov 23 15:42:13 np0005532761 systemd[91608]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:42:13 np0005532761 systemd[91608]: Reached target Sockets.
Nov 23 15:42:13 np0005532761 systemd[91608]: Reached target Basic System.
Nov 23 15:42:13 np0005532761 systemd[91608]: Reached target Main User Target.
Nov 23 15:42:13 np0005532761 systemd[91608]: Startup finished in 116ms.
Nov 23 15:42:13 np0005532761 systemd[1]: Started User Manager for UID 42477.
Nov 23 15:42:13 np0005532761 systemd[1]: Started Session 35 of User ceph-admin.
Nov 23 15:42:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:42:14] ENGINE Bus STARTING
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:42:14] ENGINE Bus STARTING
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:42:14] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:42:14] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:42:14] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:42:14] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:42:14] ENGINE Bus STARTED
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:42:14] ENGINE Bus STARTED
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:42:14] ENGINE Client ('192.168.122.100', 49202) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:42:14] ENGINE Client ('192.168.122.100', 49202) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
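[editor's note] The "Client ... lost" entry is benign: something opened a TCP connection to the cephadm HTTPS endpoint on port 7150 and closed it before the TLS handshake completed, which the CherryPy/cheroot server reports as an EOF. The log does not identify the prober; assuming it was a simple liveness check, the same entry can be reproduced with a bare connect-and-close:

    # Connect to the TLS port and close immediately without handshaking
    # (assumption: any plain TCP probe produces the same EOF log line).
    nc -z 192.168.122.100 7150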
Nov 23 15:42:14 np0005532761 ceph-mgr[74869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: Manager daemon compute-0.oyehye is now available
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:15 np0005532761 podman[91748]: 2025-11-23 20:42:15.342921121 +0000 UTC m=+1.150651645 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 23 15:42:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0[74565]: 2025-11-23T20:42:15.389+0000 7fe46725b640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e2 new map
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2025-11-23T20:42:15.389935+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  2
flags  12 joinable allow_snaps allow_multimds_snaps
created  2025-11-23T20:42:15.389822+0000
modified  2025-11-23T20:42:15.389822+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds  1
in
up  {}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  0
qdb_cluster  leader: 0 members:
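[editor's note] The multi-line FSMap above was journald-escaped in the raw log (#012 for newline, #011 for tab) and has been expanded for readability. The same map can be read back from the cluster with a standard command not shown in this log:

    # Dumps the current FSMap (epoch, compat sets, max_mds, pool ids)
    # in the same layout as the monitor's print_map output above.
    ceph fs dump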
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
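[editor's note] The sequence from the "fs volume create" dispatch through this "Finishing" line is one operation: the volumes module creates the cephfs.cephfs.meta and cephfs.cephfs.data pools, runs "fs new", and saves an mds service spec for the three compute hosts. The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks raised mid-sequence are expected, since the filesystem exists before any MDS daemon has been deployed. The equivalent CLI call, reconstructed from the audit line's arguments:

    # Drives the whole pool-create / fs-new / mds-spec sequence seen above.
    ceph fs volume create cephfs --placement="compute-0 compute-1 compute-2"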
Nov 23 15:42:15 np0005532761 podman[91748]: 2025-11-23 20:42:15.47544043 +0000 UTC m=+1.283170974 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:42:15 np0005532761 systemd[1]: libpod-f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb.scope: Deactivated successfully.
Nov 23 15:42:15 np0005532761 podman[91404]: 2025-11-23 20:42:15.504772849 +0000 UTC m=+11.555753724 container died f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:42:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d35becd138a86d56f2149eb27cb422ca96e34258588cc43e89d05e4260b9efbc-merged.mount: Deactivated successfully.
Nov 23 15:42:15 np0005532761 podman[91404]: 2025-11-23 20:42:15.551155861 +0000 UTC m=+11.602136756 container remove f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb (image=quay.io/ceph/ceph:v19, name=interesting_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:42:15 np0005532761 systemd[1]: libpod-conmon-f59c5285cba5a5e736fe38baad337755c28a38f75eb164927b47559c888964fb.scope: Deactivated successfully.
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:15 np0005532761 python3[91894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
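[editor's note] The one-line ansible _raw_params above, reflowed for readability (content unchanged):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml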
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.008767136 +0000 UTC m=+0.053133293 container create ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:42:16 np0005532761 systemd[1]: Started libpod-conmon-ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f.scope.
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:15.983659699 +0000 UTC m=+0.028025876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f1bfd836e436815014d7f388b876ca5a5080fb059cb121706dc292984628f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f1bfd836e436815014d7f388b876ca5a5080fb059cb121706dc292984628f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f1bfd836e436815014d7f388b876ca5a5080fb059cb121706dc292984628f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.103164852 +0000 UTC m=+0.147531019 container init ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.112110661 +0000 UTC m=+0.156476838 container start ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.116187419 +0000 UTC m=+0.160553766 container attach ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:42:14] ENGINE Bus STARTING
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:42:14] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:42:14] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:42:14] ENGINE Bus STARTED
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:42:14] ENGINE Client ('192.168.122.100', 49202) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.oyehye(active, since 3s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 elated_gates[91972]: Scheduled mds.cephfs update...
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.516323907 +0000 UTC m=+0.560690064 container died ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 15:42:16 np0005532761 systemd[1]: libpod-ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f.scope: Deactivated successfully.
Nov 23 15:42:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-58f1bfd836e436815014d7f388b876ca5a5080fb059cb121706dc292984628f9-merged.mount: Deactivated successfully.
Nov 23 15:42:16 np0005532761 podman[91938]: 2025-11-23 20:42:16.558171118 +0000 UTC m=+0.602537275 container remove ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f (image=quay.io/ceph/ceph:v19, name=elated_gates, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:42:16 np0005532761 systemd[1]: libpod-conmon-ec3466765c34f4c1fe5c8a0f1d20e0274da148ba66ab2a480dc6b40df6d7f22f.scope: Deactivated successfully.
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:16 np0005532761 python3[92113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
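[editor's note] Likewise reflowed, the second ansible-driven command, which creates the NFS cluster with an HAProxy-protocol ingress in front of it (the trailing #012 in the journal line is an escaped newline):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 \
      --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '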
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:16 np0005532761 podman[92130]: 2025-11-23 20:42:16.92493155 +0000 UTC m=+0.050003739 container create 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
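[editor's note] These three warnings are consistent with each other: cephadm's memory autotuner computed roughly 128 MiB per OSD from the hosts' available RAM, but osd_memory_target has a hard minimum of 939524096 bytes (896 MiB), so each config set is rejected after the per-daemon override has already been removed. On memory-constrained lab nodes like these, one option (an assumption about intent, not something the log prescribes) is to disable autotuning and pin a value at the floor:

    # Stop cephadm from recomputing per-host osd_memory_target values.
    ceph config set osd osd_memory_target_autotune false
    # Optionally pin the target at the 896 MiB minimum named in the warning.
    ceph config set osd osd_memory_target 939524096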
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:42:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:42:16 np0005532761 systemd[1]: Started libpod-conmon-35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406.scope.
Nov 23 15:42:17 np0005532761 podman[92130]: 2025-11-23 20:42:16.906610083 +0000 UTC m=+0.031682302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:17 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886ed7810157d7d8a7262ea1b0fa14657b1383a3e80af2670c72ae07d11043f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886ed7810157d7d8a7262ea1b0fa14657b1383a3e80af2670c72ae07d11043f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886ed7810157d7d8a7262ea1b0fa14657b1383a3e80af2670c72ae07d11043f7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:17 np0005532761 podman[92130]: 2025-11-23 20:42:17.022953493 +0000 UTC m=+0.148025692 container init 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 15:42:17 np0005532761 podman[92130]: 2025-11-23 20:42:17.030073013 +0000 UTC m=+0.155145212 container start 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:17 np0005532761 podman[92130]: 2025-11-23 20:42:17.03598839 +0000 UTC m=+0.161060589 container attach 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:42:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v7: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 23 15:42:19 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:19 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 systemd[1]: libpod-35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406.scope: Deactivated successfully.
Nov 23 15:42:20 np0005532761 podman[92130]: 2025-11-23 20:42:20.059294711 +0000 UTC m=+3.184366980 container died 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:20 np0005532761 systemd[1]: var-lib-containers-storage-overlay-886ed7810157d7d8a7262ea1b0fa14657b1383a3e80af2670c72ae07d11043f7-merged.mount: Deactivated successfully.
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:42:20 np0005532761 podman[92130]: 2025-11-23 20:42:20.285969422 +0000 UTC m=+3.411041621 container remove 35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406 (image=quay.io/ceph/ceph:v19, name=intelligent_napier, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev c63c5b5c-71be-44f0-812a-83195586bdaa (Updating node-exporter deployment (+3 -> 3))
Nov 23 15:42:20 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Nov 23 15:42:20 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Nov 23 15:42:20 np0005532761 systemd[1]: libpod-conmon-35c0286a1f048272931d5da7f581b7229058beb805bc5b27a2419abfed5e7406.scope: Deactivated successfully.
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: Deploying daemon node-exporter.compute-0 on compute-0
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 23 15:42:20 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 23 15:42:20 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v10: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 15:42:20 np0005532761 python3[93253]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:42:20 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:20 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:21 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:21 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:21 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:21 np0005532761 python3[93366]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930540.6422184-37545-167787510883701/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=756e8313f47ae598921d0392828cdc60f53012e2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:42:21 np0005532761 systemd[1]: Starting Ceph node-exporter.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:21 np0005532761 bash[93479]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 23 15:42:21 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:21 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.oyehye(active, since 8s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:42:21 np0005532761 python3[93517]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:21 np0005532761 podman[93518]: 2025-11-23 20:42:21.899505279 +0000 UTC m=+0.040423025 container create 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:42:21 np0005532761 systemd[1]: Started libpod-conmon-8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8.scope.
Nov 23 15:42:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd36dcadeac2ce4b5f73808f7eb6b5ceed66d7ce18f677e7cee0dda8ca4dcac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd36dcadeac2ce4b5f73808f7eb6b5ceed66d7ce18f677e7cee0dda8ca4dcac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:21 np0005532761 podman[93518]: 2025-11-23 20:42:21.974022599 +0000 UTC m=+0.114940345 container init 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:42:21 np0005532761 podman[93518]: 2025-11-23 20:42:21.882316923 +0000 UTC m=+0.023234669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:21 np0005532761 bash[93479]: Getting image source signatures
Nov 23 15:42:21 np0005532761 bash[93479]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 23 15:42:21 np0005532761 bash[93479]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 23 15:42:21 np0005532761 bash[93479]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 23 15:42:21 np0005532761 podman[93518]: 2025-11-23 20:42:21.980677485 +0000 UTC m=+0.121595231 container start 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:42:21 np0005532761 podman[93518]: 2025-11-23 20:42:21.984766974 +0000 UTC m=+0.125684760 container attach 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 23 15:42:22 np0005532761 systemd[1]: libpod-8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8.scope: Deactivated successfully.
Nov 23 15:42:22 np0005532761 podman[93518]: 2025-11-23 20:42:22.44922362 +0000 UTC m=+0.590141366 container died 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1dd36dcadeac2ce4b5f73808f7eb6b5ceed66d7ce18f677e7cee0dda8ca4dcac-merged.mount: Deactivated successfully.
Nov 23 15:42:22 np0005532761 bash[93479]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 23 15:42:22 np0005532761 bash[93479]: Writing manifest to image destination
Nov 23 15:42:22 np0005532761 podman[93518]: 2025-11-23 20:42:22.591536731 +0000 UTC m=+0.732454477 container remove 8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8 (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:22 np0005532761 systemd[1]: libpod-conmon-8dd5db62ed966df18e309748ca11b3037a3c70a2632d6e43f06c9ee394d82ec8.scope: Deactivated successfully.
Nov 23 15:42:22 np0005532761 podman[93479]: 2025-11-23 20:42:22.611796129 +0000 UTC m=+1.063993452 container create c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e912ee898ec27493be1b78d92530714cabb452f2e0687436789fea7a8d7a896/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:22 np0005532761 podman[93479]: 2025-11-23 20:42:22.664083527 +0000 UTC m=+1.116280850 container init c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:22 np0005532761 podman[93479]: 2025-11-23 20:42:22.670285062 +0000 UTC m=+1.122482365 container start c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:22 np0005532761 podman[93479]: 2025-11-23 20:42:22.594417817 +0000 UTC m=+1.046615170 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 23 15:42:22 np0005532761 bash[93479]: c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.676Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.676Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.677Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.677Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.677Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.677Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=arp
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=bcache
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=bonding
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=cpu
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=dmi
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=edac
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=entropy
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=filefd
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=netclass
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=netdev
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=netstat
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=nfs
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=nvme
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=os
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=pressure
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=rapl
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=selinux
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=softnet
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=stat
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=textfile
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=time
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=uname
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=xfs
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=node_exporter.go:117 level=info collector=zfs
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 23 15:42:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0[93634]: ts=2025-11-23T20:42:22.678Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 23 15:42:22 np0005532761 systemd[1]: Started Ceph node-exporter.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 23 15:42:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:22 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Nov 23 15:42:22 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Nov 23 15:42:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 15:42:23 np0005532761 python3[93669]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.419761049 +0000 UTC m=+0.045629792 container create e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:42:23 np0005532761 systemd[1]: Started libpod-conmon-e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87.scope.
Nov 23 15:42:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f882efbb95069ef5468179c6e495540ee5faaa3d987e57f15e5538b7cc1450/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f882efbb95069ef5468179c6e495540ee5faaa3d987e57f15e5538b7cc1450/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.491667308 +0000 UTC m=+0.117536081 container init e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.498031258 +0000 UTC m=+0.123900001 container start e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.402760888 +0000 UTC m=+0.028629651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.501883371 +0000 UTC m=+0.127752144 container attach e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 15:42:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890036027' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 15:42:23 np0005532761 admiring_kilby[93688]: 
Nov 23 15:42:23 np0005532761 admiring_kilby[93688]: {"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":68,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1763930505,"num_in_osds":3,"osd_in_since":1763930484,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84193280,"bytes_avail":64327733248,"bytes_total":64411926528,"read_bytes_sec":2900,"write_bytes_sec":0,"read_op_per_sec":3,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-11-23T20:42:15:389935+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2025-11-23T20:40:19.668770+0000","services":{}},"progress_events":{"c63c5b5c-71be-44f0-812a-83195586bdaa":{"message":"Updating node-exporter deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Nov 23 15:42:23 np0005532761 systemd[1]: libpod-e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87.scope: Deactivated successfully.
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.941797635 +0000 UTC m=+0.567666378 container died e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-24f882efbb95069ef5468179c6e495540ee5faaa3d987e57f15e5538b7cc1450-merged.mount: Deactivated successfully.
Nov 23 15:42:23 np0005532761 podman[93671]: 2025-11-23 20:42:23.984612043 +0000 UTC m=+0.610480786 container remove e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87 (image=quay.io/ceph/ceph:v19, name=admiring_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:23 np0005532761 systemd[1]: libpod-conmon-e59c46ba0b2c745d5c7608ca5bb9fdc13c1faf4aef3a3c14ec3e3fe05fe0ea87.scope: Deactivated successfully.
Nov 23 15:42:24 np0005532761 python3[93748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.339065757 +0000 UTC m=+0.056425640 container create 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:24 np0005532761 systemd[1]: Started libpod-conmon-3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080.scope.
Nov 23 15:42:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/500d72db69afaa2cf9d5a1f6154e9338926631b79d3d10a44bc7e1d474dc96f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/500d72db69afaa2cf9d5a1f6154e9338926631b79d3d10a44bc7e1d474dc96f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.308757902 +0000 UTC m=+0.026117855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.412309032 +0000 UTC m=+0.129668905 container init 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.42199829 +0000 UTC m=+0.139358163 container start 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.426869679 +0000 UTC m=+0.144229562 container attach 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 15:42:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935510662' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 15:42:24 np0005532761 upbeat_chaum[93765]: 
Nov 23 15:42:24 np0005532761 upbeat_chaum[93765]: {"epoch":3,"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","modified":"2025-11-23T20:41:10.249176Z","created":"2025-11-23T20:38:54.371685Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 23 15:42:24 np0005532761 upbeat_chaum[93765]: dumped monmap epoch 3
Nov 23 15:42:24 np0005532761 systemd[1]: libpod-3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080.scope: Deactivated successfully.
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.837896246 +0000 UTC m=+0.555256099 container died 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-500d72db69afaa2cf9d5a1f6154e9338926631b79d3d10a44bc7e1d474dc96f2-merged.mount: Deactivated successfully.
Nov 23 15:42:24 np0005532761 podman[93749]: 2025-11-23 20:42:24.871317684 +0000 UTC m=+0.588677537 container remove 3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080 (image=quay.io/ceph/ceph:v19, name=upbeat_chaum, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:24 np0005532761 systemd[1]: libpod-conmon-3d8ee84041c11d8d5b25d71c8fc717652a5ebad49b0b4493b6c5b9739db8a080.scope: Deactivated successfully.
Nov 23 15:42:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Nov 23 15:42:25 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Nov 23 15:42:25 np0005532761 python3[93826]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:25 np0005532761 podman[93827]: 2025-11-23 20:42:25.536988272 +0000 UTC m=+0.035720018 container create 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:25 np0005532761 systemd[1]: Started libpod-conmon-2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e.scope.
Nov 23 15:42:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e424ac6fe18955040ec706f8012f4beefba3daa0cb3863cba3529ad9f112d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e424ac6fe18955040ec706f8012f4beefba3daa0cb3863cba3529ad9f112d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:25 np0005532761 podman[93827]: 2025-11-23 20:42:25.603315863 +0000 UTC m=+0.102047629 container init 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:25 np0005532761 podman[93827]: 2025-11-23 20:42:25.613021738 +0000 UTC m=+0.111753504 container start 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:42:25 np0005532761 podman[93827]: 2025-11-23 20:42:25.616857418 +0000 UTC m=+0.115589184 container attach 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:42:25 np0005532761 podman[93827]: 2025-11-23 20:42:25.522776539 +0000 UTC m=+0.021508305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532761 ceph-mon[74569]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 23 15:42:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Nov 23 15:42:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1987053989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 23 15:42:26 np0005532761 confident_leakey[93842]: [client.openstack]
Nov 23 15:42:26 np0005532761 confident_leakey[93842]: 	key = AQC3cCNpAAAAABAAlqLdZNvpAVdz4ESvQvzNnA==
Nov 23 15:42:26 np0005532761 confident_leakey[93842]: 	caps mgr = "allow *"
Nov 23 15:42:26 np0005532761 confident_leakey[93842]: 	caps mon = "profile rbd"
Nov 23 15:42:26 np0005532761 confident_leakey[93842]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 23 15:42:26 np0005532761 systemd[1]: libpod-2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e.scope: Deactivated successfully.
Nov 23 15:42:26 np0005532761 podman[93827]: 2025-11-23 20:42:26.060636145 +0000 UTC m=+0.559367891 container died 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 15:42:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c8e424ac6fe18955040ec706f8012f4beefba3daa0cb3863cba3529ad9f112d7-merged.mount: Deactivated successfully.
Nov 23 15:42:26 np0005532761 podman[93827]: 2025-11-23 20:42:26.102007021 +0000 UTC m=+0.600738787 container remove 2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e (image=quay.io/ceph/ceph:v19, name=confident_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:42:26 np0005532761 systemd[1]: libpod-conmon-2987cd7b93f358cb166faa2a02794e9eabd55898d20a3c37e2da36e532f0297e.scope: Deactivated successfully.
Nov 23 15:42:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 23 15:42:27 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/1987053989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 23 15:42:27 np0005532761 ansible-async_wrapper.py[94027]: Invoked with j965155680616 30 /home/zuul/.ansible/tmp/ansible-tmp-1763930547.057436-37617-176403467499938/AnsiballZ_command.py _
Nov 23 15:42:27 np0005532761 ansible-async_wrapper.py[94030]: Starting module and watcher
Nov 23 15:42:27 np0005532761 ansible-async_wrapper.py[94030]: Start watching 94031 (30)
Nov 23 15:42:27 np0005532761 ansible-async_wrapper.py[94031]: Start module (94031)
Nov 23 15:42:27 np0005532761 ansible-async_wrapper.py[94027]: Return async_wrapper task started.
Nov 23 15:42:27 np0005532761 python3[94032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.007581464 +0000 UTC m=+0.026838726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.204257695 +0000 UTC m=+0.223514937 container create 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:28 np0005532761 systemd[1]: Started libpod-conmon-14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335.scope.
Nov 23 15:42:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63631100094b18f2d67d4595d572444a7f664f4ae2d8d0ac1eef03c4670e63d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63631100094b18f2d67d4595d572444a7f664f4ae2d8d0ac1eef03c4670e63d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.276737847 +0000 UTC m=+0.295995109 container init 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.283102935 +0000 UTC m=+0.302360177 container start 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.286512434 +0000 UTC m=+0.305769676 container attach 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:28 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev c63c5b5c-71be-44f0-812a-83195586bdaa (Updating node-exporter deployment (+3 -> 3))
Nov 23 15:42:28 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event c63c5b5c-71be-44f0-812a-83195586bdaa (Updating node-exporter deployment (+3 -> 3)) in 8 seconds
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:42:28 np0005532761 objective_heyrovsky[94048]: 
Nov 23 15:42:28 np0005532761 objective_heyrovsky[94048]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 23 15:42:28 np0005532761 systemd[1]: libpod-14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335.scope: Deactivated successfully.
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.655254671 +0000 UTC m=+0.674511913 container died 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:42:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-63631100094b18f2d67d4595d572444a7f664f4ae2d8d0ac1eef03c4670e63d2-merged.mount: Deactivated successfully.
Nov 23 15:42:28 np0005532761 podman[94033]: 2025-11-23 20:42:28.691901314 +0000 UTC m=+0.711158556 container remove 14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335 (image=quay.io/ceph/ceph:v19, name=objective_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:28 np0005532761 systemd[1]: libpod-conmon-14ed8324cfb7cb5beea1583eda4eb432e7f4db798d8a218883ec2df723ca0335.scope: Deactivated successfully.
Nov 23 15:42:28 np0005532761 ansible-async_wrapper.py[94031]: Module complete (94031)
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.763158833 +0000 UTC m=+0.035738508 container create 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:28 np0005532761 systemd[1]: Started libpod-conmon-210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db.scope.
Nov 23 15:42:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.822312586 +0000 UTC m=+0.094892311 container init 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.828887388 +0000 UTC m=+0.101467063 container start 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:28 np0005532761 ecstatic_meitner[94209]: 167 167
Nov 23 15:42:28 np0005532761 systemd[1]: libpod-210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db.scope: Deactivated successfully.
Nov 23 15:42:28 np0005532761 conmon[94209]: conmon 210c47e002b4184daf04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db.scope/container/memory.events
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.833219782 +0000 UTC m=+0.105799457 container attach 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.833400648 +0000 UTC m=+0.105980313 container died 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.747208915 +0000 UTC m=+0.019788610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-603393c5fa5c95c7672d95d3f1cf0b3113b3d00a3f9e44d7c077463f513a4b3e-merged.mount: Deactivated successfully.
Nov 23 15:42:28 np0005532761 podman[94174]: 2025-11-23 20:42:28.86782122 +0000 UTC m=+0.140400895 container remove 210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_meitner, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:28 np0005532761 systemd[1]: libpod-conmon-210c47e002b4184daf0471c9df337fa2154ef1850396a61f0b29938aaa8748db.scope: Deactivated successfully.
Nov 23 15:42:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 2 op/s
Nov 23 15:42:28 np0005532761 python3[94252]: ansible-ansible.legacy.async_status Invoked with jid=j965155680616.94027 mode=status _async_dir=/root/.ansible_async
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.018517066 +0000 UTC m=+0.043921625 container create be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:29 np0005532761 systemd[1]: Started libpod-conmon-be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810.scope.
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:28.997054902 +0000 UTC m=+0.022459441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.11925323 +0000 UTC m=+0.144657779 container init be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.12880136 +0000 UTC m=+0.154205879 container start be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.133222276 +0000 UTC m=+0.158626785 container attach be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:42:29 np0005532761 python3[94334]: ansible-ansible.legacy.async_status Invoked with jid=j965155680616.94027 mode=cleanup _async_dir=/root/.ansible_async
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:42:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:29 np0005532761 laughing_visvesvaraya[94305]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:42:29 np0005532761 laughing_visvesvaraya[94305]: --> All data devices are unavailable
Nov 23 15:42:29 np0005532761 systemd[1]: libpod-be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810.scope: Deactivated successfully.
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.433932418 +0000 UTC m=+0.459336977 container died be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a701929472158c5a151ba2e2f833b3f9d153ad5e071e19542ac8b5a44cc4df68-merged.mount: Deactivated successfully.
Nov 23 15:42:29 np0005532761 podman[94263]: 2025-11-23 20:42:29.483649943 +0000 UTC m=+0.509054462 container remove be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:29 np0005532761 systemd[1]: libpod-conmon-be83d34ce3ffb901c496a5d85ef5aa300da8b0e3c2a738b2eab336df98c22810.scope: Deactivated successfully.
Nov 23 15:42:29 np0005532761 python3[94433]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:29 np0005532761 podman[94441]: 2025-11-23 20:42:29.863284297 +0000 UTC m=+0.045454294 container create 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 15:42:29 np0005532761 systemd[1]: Started libpod-conmon-949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843.scope.
Nov 23 15:42:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542995f2c14297711494be9b1f7ecbfda4bbc2900d2235ae5a7b3b14bda21903/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542995f2c14297711494be9b1f7ecbfda4bbc2900d2235ae5a7b3b14bda21903/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:29 np0005532761 podman[94441]: 2025-11-23 20:42:29.840188301 +0000 UTC m=+0.022358318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:29 np0005532761 podman[94441]: 2025-11-23 20:42:29.937092514 +0000 UTC m=+0.119262521 container init 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:42:29 np0005532761 podman[94441]: 2025-11-23 20:42:29.942334202 +0000 UTC m=+0.124504199 container start 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:29 np0005532761 podman[94441]: 2025-11-23 20:42:29.945429783 +0000 UTC m=+0.127599780 container attach 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.005950181 +0000 UTC m=+0.037520326 container create 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:30 np0005532761 systemd[1]: Started libpod-conmon-36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c.scope.
Nov 23 15:42:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.063341647 +0000 UTC m=+0.094911822 container init 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.069441607 +0000 UTC m=+0.101011752 container start 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.072275351 +0000 UTC m=+0.103845516 container attach 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:30 np0005532761 loving_elion[94508]: 167 167
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.074203963 +0000 UTC m=+0.105774108 container died 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:29.988426432 +0000 UTC m=+0.019996597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d658708ddb9fa7e82dbaad5c913072f309f67ab3ad7e35a68ead48235a8eeb33-merged.mount: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94492]: 2025-11-23 20:42:30.109862618 +0000 UTC m=+0.141432763 container remove 36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_elion, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-conmon-36bbf5d1da22c33629e06adadf9933b192f844f03faa83bdc946cf7756e42e4c.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.263132021 +0000 UTC m=+0.042334313 container create 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:42:30 np0005532761 sweet_faraday[94488]: 
Nov 23 15:42:30 np0005532761 sweet_faraday[94488]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 23 15:42:30 np0005532761 systemd[1]: Started libpod-conmon-8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65.scope.
Nov 23 15:42:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94441]: 2025-11-23 20:42:30.317849227 +0000 UTC m=+0.500019244 container died 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb4296065fe3f27dc2ea3fb59df7e3637f7d71c5815efdb7d34feb5bf567675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb4296065fe3f27dc2ea3fb59df7e3637f7d71c5815efdb7d34feb5bf567675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb4296065fe3f27dc2ea3fb59df7e3637f7d71c5815efdb7d34feb5bf567675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb4296065fe3f27dc2ea3fb59df7e3637f7d71c5815efdb7d34feb5bf567675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.331190927 +0000 UTC m=+0.110393239 container init 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.241987586 +0000 UTC m=+0.021189898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.338838018 +0000 UTC m=+0.118040320 container start 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.347795293 +0000 UTC m=+0.126997585 container attach 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:30 np0005532761 podman[94441]: 2025-11-23 20:42:30.36671037 +0000 UTC m=+0.548880367 container remove 949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843 (image=quay.io/ceph/ceph:v19, name=sweet_faraday, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-conmon-949448ff1eca126349f4667e2d53fcb9315909ed71d30e1d955d8a9beaed4843.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-542995f2c14297711494be9b1f7ecbfda4bbc2900d2235ae5a7b3b14bda21903-merged.mount: Deactivated successfully.
Nov 23 15:42:30 np0005532761 focused_gould[94569]: {
Nov 23 15:42:30 np0005532761 focused_gould[94569]:    "1": [
Nov 23 15:42:30 np0005532761 focused_gould[94569]:        {
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "devices": [
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "/dev/loop3"
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            ],
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "lv_name": "ceph_lv0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "lv_size": "21470642176",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "name": "ceph_lv0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "tags": {
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.cluster_name": "ceph",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.crush_device_class": "",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.encrypted": "0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.osd_id": "1",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.type": "block",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.vdo": "0",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:                "ceph.with_tpm": "0"
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            },
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "type": "block",
Nov 23 15:42:30 np0005532761 focused_gould[94569]:            "vg_name": "ceph_vg0"
Nov 23 15:42:30 np0005532761 focused_gould[94569]:        }
Nov 23 15:42:30 np0005532761 focused_gould[94569]:    ]
Nov 23 15:42:30 np0005532761 focused_gould[94569]: }
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.612278364 +0000 UTC m=+0.391480656 container died 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:42:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-deb4296065fe3f27dc2ea3fb59df7e3637f7d71c5815efdb7d34feb5bf567675-merged.mount: Deactivated successfully.
Nov 23 15:42:30 np0005532761 podman[94550]: 2025-11-23 20:42:30.651628827 +0000 UTC m=+0.430831109 container remove 8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_gould, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:42:30 np0005532761 systemd[1]: libpod-conmon-8e8198aedb8caa1d6d62853a93984ff732315368d7d4accb0ae4ceca3d9a8c65.scope: Deactivated successfully.
Nov 23 15:42:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:31 np0005532761 python3[94679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.194843534 +0000 UTC m=+0.041044468 container create 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.212322393 +0000 UTC m=+0.040019202 container create 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:31 np0005532761 systemd[1]: Started libpod-conmon-65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb.scope.
Nov 23 15:42:31 np0005532761 systemd[1]: Started libpod-conmon-34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe.scope.
Nov 23 15:42:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ecb773a39069930b0eb062579a79a5600f8102c92a49d7762e10456d2d12a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ecb773a39069930b0eb062579a79a5600f8102c92a49d7762e10456d2d12a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.261595656 +0000 UTC m=+0.089292505 container init 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.263699051 +0000 UTC m=+0.109899925 container init 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.268885987 +0000 UTC m=+0.096582806 container start 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.269229036 +0000 UTC m=+0.115429870 container start 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:42:31 np0005532761 happy_kepler[94750]: 167 167
Nov 23 15:42:31 np0005532761 systemd[1]: libpod-65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb.scope: Deactivated successfully.
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.17716374 +0000 UTC m=+0.023364594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.273116108 +0000 UTC m=+0.100812957 container attach 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.276312692 +0000 UTC m=+0.122513526 container attach 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.276513697 +0000 UTC m=+0.122714531 container died 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.190633293 +0000 UTC m=+0.018330142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-469e5511f0e8b1cdfa8d41beacc31298230170463d5e0a5e6e3f21e6a0758a56-merged.mount: Deactivated successfully.
Nov 23 15:42:31 np0005532761 podman[94719]: 2025-11-23 20:42:31.313238921 +0000 UTC m=+0.159439755 container remove 65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:31 np0005532761 systemd[1]: libpod-conmon-65fd70dd525b632f7ecac67c185226235e2eacf9253287c29055af2d001f52fb.scope: Deactivated successfully.
Nov 23 15:42:31 np0005532761 podman[94796]: 2025-11-23 20:42:31.475753797 +0000 UTC m=+0.052798247 container create 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:31 np0005532761 systemd[1]: Started libpod-conmon-52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23.scope.
Nov 23 15:42:31 np0005532761 podman[94796]: 2025-11-23 20:42:31.444168587 +0000 UTC m=+0.021213057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085369e067d64bf6b644392c312572c3620dec2570dcb1cd518fa1f9984c118/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085369e067d64bf6b644392c312572c3620dec2570dcb1cd518fa1f9984c118/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085369e067d64bf6b644392c312572c3620dec2570dcb1cd518fa1f9984c118/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085369e067d64bf6b644392c312572c3620dec2570dcb1cd518fa1f9984c118/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:31 np0005532761 podman[94796]: 2025-11-23 20:42:31.57115242 +0000 UTC m=+0.148196890 container init 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:42:31 np0005532761 podman[94796]: 2025-11-23 20:42:31.576224054 +0000 UTC m=+0.153268504 container start 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:31 np0005532761 podman[94796]: 2025-11-23 20:42:31.580098205 +0000 UTC m=+0.157142645 container attach 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:31 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:42:31 np0005532761 quizzical_lehmann[94752]: 
Nov 23 15:42:31 np0005532761 quizzical_lehmann[94752]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
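[editor's note] The single-line JSON above is the full set of service specs exported by the `ceph orch ls --export -f json` task logged a few lines earlier. A sketch for summarizing it per service, mirroring (trimmed) the containerized invocation from the log and assuming jq is available on the host:

    # Sketch: one line per service spec (name, type, placement)
    # from the same `orch ls --export -f json` call shown above.
    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch ls --export -f json |
    jq -r '.[] | "\(.service_name)\t\(.service_type)\t\(.placement)"'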
Nov 23 15:42:31 np0005532761 systemd[1]: libpod-34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe.scope: Deactivated successfully.
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.64470298 +0000 UTC m=+0.472399799 container died 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:42:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a4ecb773a39069930b0eb062579a79a5600f8102c92a49d7762e10456d2d12a2-merged.mount: Deactivated successfully.
Nov 23 15:42:31 np0005532761 podman[94726]: 2025-11-23 20:42:31.682529183 +0000 UTC m=+0.510226002 container remove 34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe (image=quay.io/ceph/ceph:v19, name=quizzical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:42:31 np0005532761 systemd[1]: libpod-conmon-34020e0e7b42788ab365d98f90f734dd94d2d7a77d0dc0d95701ac057a9e59fe.scope: Deactivated successfully.
Nov 23 15:42:32 np0005532761 lvm[94900]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:42:32 np0005532761 lvm[94900]: VG ceph_vg0 finished
Nov 23 15:42:32 np0005532761 funny_darwin[94813]: {}
Nov 23 15:42:32 np0005532761 systemd[1]: libpod-52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23.scope: Deactivated successfully.
Nov 23 15:42:32 np0005532761 systemd[1]: libpod-52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23.scope: Consumed 1.002s CPU time.
Nov 23 15:42:32 np0005532761 podman[94796]: 2025-11-23 20:42:32.257965535 +0000 UTC m=+0.835009985 container died 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:42:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b085369e067d64bf6b644392c312572c3620dec2570dcb1cd518fa1f9984c118-merged.mount: Deactivated successfully.
Nov 23 15:42:32 np0005532761 podman[94796]: 2025-11-23 20:42:32.299716001 +0000 UTC m=+0.876760451 container remove 52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:42:32 np0005532761 systemd[1]: libpod-conmon-52960e9c13fea09400fbc7c1aa0244507db504562bbc6369beef58894cfced23.scope: Deactivated successfully.
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:32 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 0af40849-959b-4d66-b650-e2f8132fbaaf (Updating rgw.rgw deployment (+3 -> 3))
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
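[editor's note] The audit lines above record cephadm minting the keyring for the new RGW daemon before deploying it. A manual CLI equivalent of that mon command would look like the following sketch (entity name copied from the log purely for illustration; normally cephadm does this for you):

    # Illustrative manual equivalent of the auth call cephadm just issued:
    ceph auth get-or-create client.rgw.rgw.compute-2.cwocqr \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'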
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:32 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.cwocqr on compute-2
Nov 23 15:42:32 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.cwocqr on compute-2
Nov 23 15:42:32 np0005532761 ansible-async_wrapper.py[94030]: Done in kid B.
Nov 23 15:42:32 np0005532761 python3[94940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:32 np0005532761 podman[94941]: 2025-11-23 20:42:32.782966644 +0000 UTC m=+0.018227919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:32 np0005532761 podman[94941]: 2025-11-23 20:42:32.95463691 +0000 UTC m=+0.189898165 container create 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:32 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 11 completed events
Nov 23 15:42:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532761 systemd[1]: Started libpod-conmon-21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4.scope.
Nov 23 15:42:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6933fc7faa29281fa07a66d97932d9d658fe27fbe08c3d79ef3b19f8dfff916/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6933fc7faa29281fa07a66d97932d9d658fe27fbe08c3d79ef3b19f8dfff916/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:33 np0005532761 podman[94941]: 2025-11-23 20:42:33.127656812 +0000 UTC m=+0.362918167 container init 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:42:33 np0005532761 podman[94941]: 2025-11-23 20:42:33.139112102 +0000 UTC m=+0.374373387 container start 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:33 np0005532761 podman[94941]: 2025-11-23 20:42:33.245473684 +0000 UTC m=+0.480734979 container attach 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:42:33 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 15:42:33 np0005532761 boring_knuth[94954]: 
Nov 23 15:42:33 np0005532761 boring_knuth[94954]: [{"container_id": "54fdcc7d7f6d", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.11%", "created": "2025-11-23T20:39:36.220265Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-23T20:42:15.827423Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-11-23T20:39:36.010761Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@crash.compute-0", "version": "19.2.3"}, {"container_id": "e0f32b933903", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.39%", "created": "2025-11-23T20:40:20.281514Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-23T20:42:15.731718Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-11-23T20:40:20.201617Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@crash.compute-1", "version": "19.2.3"}, {"container_id": "4ad194abaacb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.40%", "created": "2025-11-23T20:41:22.081886Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-23T20:42:15.157324Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2025-11-23T20:41:21.952022Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@crash.compute-2", "version": "19.2.3"}, {"container_id": "47b4a98cc84d", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "26.90%", "created": "2025-11-23T20:39:00.955408Z", "daemon_id": "compute-0.oyehye", "daemon_name": "mgr.compute-0.oyehye", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-23T20:42:15.827319Z", "memory_usage": 542533222, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-23T20:39:00.834169Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mgr.compute-0.oyehye", "version": "19.2.3"}, {"container_id": "7db62be7e181", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "46.33%", "created": "2025-11-23T20:41:19.859192Z", "daemon_id": "compute-1.kgyerp", "daemon_name": "mgr.compute-1.kgyerp", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-23T20:42:15.732280Z", "memory_usage": 503631052, "ports": [8765], "service_name": "mgr", "started": "2025-11-23T20:41:19.768009Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mgr.compute-1.kgyerp", "version": "19.2.3"}, {"container_id": "21c1b17ca817", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "38.25%", "created": "2025-11-23T20:41:11.613591Z", "daemon_id": "compute-2.jtkauz", "daemon_name": "mgr.compute-2.jtkauz", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-23T20:42:15.157224Z", "memory_usage": 505413632, "ports": [8765], "service_name": "mgr", "started": "2025-11-23T20:41:11.480081Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mgr.compute-2.jtkauz", "version": "19.2.3"}, {"container_id": "9716c164d9b8", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.42%", "created": "2025-11-23T20:38:56.311287Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-23T20:42:15.827148Z", "memory_request": 2147483648, "memory_usage": 56287559, "ports": [], "service_name": "mon", "started": "2025-11-23T20:38:58.545280Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-0", "version": "19.2.3"}, {"container_id": "ec83ddfeced6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.80%", "created": "2025-11-23T20:41:06.090799Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-23T20:42:15.732127Z", "memory_request": 2147483648, "memory_usage": 46430945, "ports": [], "service_name": "mon", "started": "2025-11-23T20:41:05.947708Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-1", "version": "19.2.3"}, {"container_id": "3d9e8671bf70", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.71%", "created": "2025-11-23T20:41:04.248383Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-23T20:42:15.157091Z", "memory_request": 2147483648, "memory_usage": 42184212, "ports": [], "service_name": "mon", "started": "2025-11-23T20:41:04.106893Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-11-23T20:42:22.753721Z daemon:node-exporter.compu
Nov 23 15:42:33 np0005532761 systemd[1]: libpod-21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4.scope: Deactivated successfully.
Nov 23 15:42:33 np0005532761 podman[94979]: 2025-11-23 20:42:33.57912362 +0000 UTC m=+0.034735923 container died 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: Deploying daemon rgw.rgw.compute-2.cwocqr on compute-2
Nov 23 15:42:33 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532761 rsyslogd[1006]: message too long (11754) with configured size 8096, begin of message is: [{"container_id": "54fdcc7d7f6d", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
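[editor's note] This warning explains the truncated `orch ps` JSON above: rsyslog's default message limit (8096 bytes here) was exceeded. The limit is controlled by the global maxMessageSize parameter; a config sketch, with 64k as an arbitrary example (on RHEL-family hosts the directive must be parsed before the input modules load, so it belongs near the top of /etc/rsyslog.conf):

    # /etc/rsyslog.conf (near the top, before input modules are loaded)
    global(maxMessageSize="64k")

followed by a restart of rsyslog.service.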
Nov 23 15:42:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e6933fc7faa29281fa07a66d97932d9d658fe27fbe08c3d79ef3b19f8dfff916-merged.mount: Deactivated successfully.
Nov 23 15:42:34 np0005532761 podman[94979]: 2025-11-23 20:42:34.043444706 +0000 UTC m=+0.499057019 container remove 21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4 (image=quay.io/ceph/ceph:v19, name=boring_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:34 np0005532761 systemd[1]: libpod-conmon-21484004728dfcd604de9970337925a853bd44ccc8d3b25ce0479477c64ed4a4.scope: Deactivated successfully.
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:34 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.exwrda on compute-1
Nov 23 15:42:34 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.exwrda on compute-1
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:34 np0005532761 python3[95022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.022722818 +0000 UTC m=+0.026425065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.162725102 +0000 UTC m=+0.166427349 container create 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:35 np0005532761 systemd[1]: Started libpod-conmon-5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6.scope.
Nov 23 15:42:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd40c0359bc60f44601e8c6957755163c79f1eb1e62f4a2e349381d276f672ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd40c0359bc60f44601e8c6957755163c79f1eb1e62f4a2e349381d276f672ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.385420717 +0000 UTC m=+0.389122974 container init 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.391682232 +0000 UTC m=+0.395384459 container start 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.435518652 +0000 UTC m=+0.439220929 container attach 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 23 15:42:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: Deploying daemon rgw.rgw.compute-1.exwrda on compute-1
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/1418789177' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 15:42:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010806180' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 15:42:35 np0005532761 relaxed_poitras[95038]: 
Nov 23 15:42:35 np0005532761 relaxed_poitras[95038]: {"fsid":"03808be8-ae4a-5548-82e6-4a294f1bc627","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":80,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1763930505,"num_in_osds":3,"osd_in_since":1763930484,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84246528,"bytes_avail":64327680000,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-11-23T20:42:15:389935+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2025-11-23T20:40:19.668770+0000","services":{}},"progress_events":{"0af40849-959b-4d66-b650-e2f8132fbaaf":{"message":"Updating rgw.rgw deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
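[editor's note] The status JSON above reports HEALTH_ERR, but both checks (MDS_ALL_DOWN, MDS_UP_LESS_THAN_MAX) concern the freshly created CephFS filesystem that has no MDS up yet — likely transient while cephadm is still deploying the mds.cephfs daemons listed in the exported specs. A sketch for pulling just the health summary out of `ceph -s -f json` (in this environment the command runs via the same podman wrapper shown above):

    # Sketch: print overall health plus each check name and message.
    ceph -s -f json |
    jq -r '.health.status,
           (.health.checks | to_entries[]
            | "\(.key): \(.value.summary.message)")'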
Nov 23 15:42:35 np0005532761 systemd[1]: libpod-5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6.scope: Deactivated successfully.
Nov 23 15:42:35 np0005532761 conmon[95038]: conmon 5df805107231170b7bb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6.scope/container/memory.events
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.897898217 +0000 UTC m=+0.901600454 container died 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:42:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-dd40c0359bc60f44601e8c6957755163c79f1eb1e62f4a2e349381d276f672ab-merged.mount: Deactivated successfully.
Nov 23 15:42:35 np0005532761 podman[95023]: 2025-11-23 20:42:35.950714414 +0000 UTC m=+0.954416641 container remove 5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6 (image=quay.io/ceph/ceph:v19, name=relaxed_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:35 np0005532761 systemd[1]: libpod-conmon-5df805107231170b7bb3995242668d457bc37907f6df81008d9f1532786fe6d6.scope: Deactivated successfully.
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:36 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.lntkpb on compute-0
Nov 23 15:42:36 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.lntkpb on compute-0
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 23 15:42:36 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 43 pg[9.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 23 15:42:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v20: 133 pgs: 1 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.096471724 +0000 UTC m=+0.064311329 container create 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:42:37 np0005532761 python3[95181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:37 np0005532761 systemd[1]: Started libpod-conmon-516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781.scope.
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.056476535 +0000 UTC m=+0.024316160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.178361524 +0000 UTC m=+0.040772001 container create 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 15:42:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.200077703 +0000 UTC m=+0.167917308 container init 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.205763262 +0000 UTC m=+0.173602867 container start 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.208521725 +0000 UTC m=+0.176361330 container attach 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:37 np0005532761 eloquent_roentgen[95222]: 167 167
Nov 23 15:42:37 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.212360736 +0000 UTC m=+0.180200341 container died 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:37 np0005532761 systemd[1]: Started libpod-conmon-6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088.scope.
Nov 23 15:42:37 np0005532761 systemd[1]: libpod-516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781.scope: Deactivated successfully.
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.161956093 +0000 UTC m=+0.024366590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a694c9eb5255dfe157c41d8defef6575acf43b54f3e679980083468ecfb98cb0-merged.mount: Deactivated successfully.
Nov 23 15:42:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cafd1331080da0ea1856f06fdfc41ddfb2495777a9f508da58a0e39be5b1fba2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cafd1331080da0ea1856f06fdfc41ddfb2495777a9f508da58a0e39be5b1fba2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.289213893 +0000 UTC m=+0.151624410 container init 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.295174359 +0000 UTC m=+0.157584846 container start 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.308262132 +0000 UTC m=+0.170672649 container attach 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:37 np0005532761 podman[95195]: 2025-11-23 20:42:37.328016541 +0000 UTC m=+0.295856146 container remove 516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_roentgen, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:37 np0005532761 systemd[1]: libpod-conmon-516bc83daee63d9263e91d156a954fa24f663e31ee9ec3cc2a6a5036a0f0e781.scope: Deactivated successfully.
Nov 23 15:42:37 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:37 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408235100' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 15:42:37 np0005532761 sad_fermat[95234]: 
Nov 23 15:42:37 np0005532761 sad_fermat[95234]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_in
secure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.oyehye/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.kgyerp/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.jtkauz/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502926848","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.lntkpb","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.exwrda","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.cwocqr","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 23 15:42:37 np0005532761 podman[95210]: 2025-11-23 20:42:37.755229263 +0000 UTC m=+0.617639740 container died 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:37 np0005532761 systemd[1]: libpod-6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088.scope: Deactivated successfully.
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: Deploying daemon rgw.rgw.compute-0.lntkpb on compute-0
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:37 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:37 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Nov 23 15:42:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cafd1331080da0ea1856f06fdfc41ddfb2495777a9f508da58a0e39be5b1fba2-merged.mount: Deactivated successfully.
Nov 23 15:42:38 np0005532761 podman[95210]: 2025-11-23 20:42:38.119536015 +0000 UTC m=+0.981946492 container remove 6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088 (image=quay.io/ceph/ceph:v19, name=sad_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:42:38 np0005532761 systemd[1]: libpod-conmon-6b016660473bb19d8420324ced3ee309b58b180a7b96468c36229a1d966b1088.scope: Deactivated successfully.
Nov 23 15:42:38 np0005532761 systemd[1]: Starting Ceph rgw.rgw.compute-0.lntkpb for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:38 np0005532761 podman[95411]: 2025-11-23 20:42:38.356210596 +0000 UTC m=+0.050691771 container create 1a7389319240c6ead5c1fafe3d563012e524c217998ddc77015b30cadfd03191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-0-lntkpb, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:42:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c277cb54870dacba0f7b764f9eefafd375ae933757f1ec30e83826f11b904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c277cb54870dacba0f7b764f9eefafd375ae933757f1ec30e83826f11b904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c277cb54870dacba0f7b764f9eefafd375ae933757f1ec30e83826f11b904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c277cb54870dacba0f7b764f9eefafd375ae933757f1ec30e83826f11b904/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.lntkpb supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:38 np0005532761 podman[95411]: 2025-11-23 20:42:38.325859839 +0000 UTC m=+0.020341034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:38 np0005532761 podman[95411]: 2025-11-23 20:42:38.422368502 +0000 UTC m=+0.116849687 container init 1a7389319240c6ead5c1fafe3d563012e524c217998ddc77015b30cadfd03191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-0-lntkpb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:42:38 np0005532761 podman[95411]: 2025-11-23 20:42:38.427338663 +0000 UTC m=+0.121819828 container start 1a7389319240c6ead5c1fafe3d563012e524c217998ddc77015b30cadfd03191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-0-lntkpb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:38 np0005532761 radosgw[95430]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:38 np0005532761 radosgw[95430]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 23 15:42:38 np0005532761 radosgw[95430]: framework: beast
Nov 23 15:42:38 np0005532761 radosgw[95430]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 23 15:42:38 np0005532761 radosgw[95430]: init_numa not setting numa affinity
Nov 23 15:42:38 np0005532761 bash[95411]: 1a7389319240c6ead5c1fafe3d563012e524c217998ddc77015b30cadfd03191
Nov 23 15:42:38 np0005532761 systemd[1]: Started Ceph rgw.rgw.compute-0.lntkpb for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 0af40849-959b-4d66-b650-e2f8132fbaaf (Updating rgw.rgw deployment (+3 -> 3))
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 0af40849-959b-4d66-b650-e2f8132fbaaf (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:42:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v22: 134 pgs: 2 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:39 np0005532761 python3[96044]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.116432549 +0000 UTC m=+0.047516859 container create aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev d93e0f15-a797-43f4-aedc-56cd3387333c (Updating mds.cephfs deployment (+3 -> 3))
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:39 np0005532761 systemd[1]: Started libpod-conmon-aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319.scope.
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:39 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.utubtn on compute-2
Nov 23 15:42:39 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.utubtn on compute-2
Nov 23 15:42:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.095049328 +0000 UTC m=+0.026133638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152a0247132f95f3440ad0660e6dd5908f9a4c80ffa59b6a3532d2b5d5a82a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152a0247132f95f3440ad0660e6dd5908f9a4c80ffa59b6a3532d2b5d5a82a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.230386679 +0000 UTC m=+0.161470979 container init aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.238154093 +0000 UTC m=+0.169238383 container start aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.2609166 +0000 UTC m=+0.192000890 container attach aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456864184' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 23 15:42:39 np0005532761 stupefied_antonelli[96064]: mimic
Nov 23 15:42:39 np0005532761 systemd[1]: libpod-aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319.scope: Deactivated successfully.
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.606276395 +0000 UTC m=+0.537360695 container died aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: Deploying daemon mds.cephfs.compute-2.utubtn on compute-2
Nov 23 15:42:39 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b152a0247132f95f3440ad0660e6dd5908f9a4c80ffa59b6a3532d2b5d5a82a0-merged.mount: Deactivated successfully.
Nov 23 15:42:39 np0005532761 podman[96045]: 2025-11-23 20:42:39.805333919 +0000 UTC m=+0.736418209 container remove aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319 (image=quay.io/ceph/ceph:v19, name=stupefied_antonelli, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:39 np0005532761 systemd[1]: libpod-conmon-aba9157e94e158f2a9c041dcd9bbc4c219ad86e83a5d503a7de9b3d1aa46a319.scope: Deactivated successfully.
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 23 15:42:40 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532761 python3[96127]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:42:40 np0005532761 podman[96130]: 2025-11-23 20:42:40.839469359 +0000 UTC m=+0.035025500 container create 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:40 np0005532761 systemd[1]: Started libpod-conmon-830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5.scope.
Nov 23 15:42:40 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e23ad464bc5c587130146148febaaeb857b7f13f406f10ff749174d6b2d5a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e23ad464bc5c587130146148febaaeb857b7f13f406f10ff749174d6b2d5a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:40 np0005532761 podman[96130]: 2025-11-23 20:42:40.901488148 +0000 UTC m=+0.097044329 container init 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:40 np0005532761 podman[96130]: 2025-11-23 20:42:40.910526195 +0000 UTC m=+0.106082336 container start 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:42:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v25: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.9 KiB/s wr, 10 op/s
Nov 23 15:42:40 np0005532761 podman[96130]: 2025-11-23 20:42:40.915997958 +0000 UTC m=+0.111554129 container attach 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:42:40 np0005532761 podman[96130]: 2025-11-23 20:42:40.824928948 +0000 UTC m=+0.020485119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
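
Each RGW instance tags its own pools at startup, which is why the same "osd pool application enable" command arrives from compute-0, compute-1, and compute-2 and is dispatched and finished three times; the POOL_APP_NOT_ENABLED warning raised at pool creation goes away once the tag is applied. A minimal sketch of issuing that mon command through the librados Python binding (assumes python3-rados is installed and a client.admin keyring is reachable via /etc/ceph/ceph.conf; the command JSON is copied from the audit lines):

    import json
    import rados

    # Replay the mon command from the audit log. Assumes python3-rados and
    # an admin keyring discoverable through /etc/ceph/ceph.conf.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {"prefix": "osd pool application enable",
           "pool": "default.rgw.control",
           "app": "rgw"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()
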
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:41 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 47 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
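
Worth noting the shape of the caps argument in these auth calls: a flat list alternating subsystem and grant ("mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"). Folded into a mapping, as in this small sketch, it reads as the capability set the new MDS key receives:

    # The "caps" field in the audit line above alternates subsystem / grant.
    caps_list = ["mon", "profile mds",
                 "osd", "allow rw tag cephfs *=*",
                 "mds", "allow"]
    caps = dict(zip(caps_list[::2], caps_list[1::2]))
    # -> {'mon': 'profile mds', 'osd': 'allow rw tag cephfs *=*', 'mds': 'allow'}
    print(caps)
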
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421589308' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:41 np0005532761 romantic_boyd[96145]: 
Nov 23 15:42:41 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jcbopz on compute-0
Nov 23 15:42:41 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jcbopz on compute-0
Nov 23 15:42:41 np0005532761 romantic_boyd[96145]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":1},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":10}}
Nov 23 15:42:41 np0005532761 systemd[1]: libpod-830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5.scope: Deactivated successfully.
Nov 23 15:42:41 np0005532761 podman[96130]: 2025-11-23 20:42:41.34544716 +0000 UTC m=+0.541003301 container died 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:42:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a5e23ad464bc5c587130146148febaaeb857b7f13f406f10ff749174d6b2d5a0-merged.mount: Deactivated successfully.
Nov 23 15:42:41 np0005532761 podman[96130]: 2025-11-23 20:42:41.396951912 +0000 UTC m=+0.592508063 container remove 830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5 (image=quay.io/ceph/ceph:v19, name=romantic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:42:41 np0005532761 systemd[1]: libpod-conmon-830a72e875aaae358d7648630307d4e116e63f985fea5e7b91f92f223b159ba5.scope: Deactivated successfully.
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.794593108 +0000 UTC m=+0.032885995 container create 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:41 np0005532761 systemd[1]: Started libpod-conmon-3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a.scope.
Nov 23 15:42:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.85793246 +0000 UTC m=+0.096225387 container init 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.863481276 +0000 UTC m=+0.101774173 container start 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.86710182 +0000 UTC m=+0.105394707 container attach 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:41 np0005532761 exciting_cray[96296]: 167 167
Nov 23 15:42:41 np0005532761 systemd[1]: libpod-3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a.scope: Deactivated successfully.
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.868962349 +0000 UTC m=+0.107255236 container died 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.781048302 +0000 UTC m=+0.019341209 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1255ab4a129b5bc93c80ca1c1ce8d52a589b86140d0b1b72b84318eea7cf2e05-merged.mount: Deactivated successfully.
Nov 23 15:42:41 np0005532761 podman[96280]: 2025-11-23 20:42:41.903269379 +0000 UTC m=+0.141562266 container remove 3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:41 np0005532761 systemd[1]: libpod-conmon-3ad26ac3235f253e25986cefae30ed2bccdca0af020dd3fc0e6139251be8bc3a.scope: Deactivated successfully.
Nov 23 15:42:41 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:42 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:42 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:42 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: Deploying daemon mds.cephfs.compute-0.jcbopz on compute-0
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:42 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:42 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e3 new map
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e3 print_map
    e3
    btime 2025-11-23T20:42:42.276651+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:15.389822+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:

    Standby daemons:

    [mds.cephfs.compute-2.utubtn{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:boot
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] as mds.0
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.utubtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e3 all = 0
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e4 new map
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e4 print_map
    e4
    btime 2025-11-23T20:42:42.291982+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  4
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:42.291972+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
    [mds.cephfs.compute-2.utubtn{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:creating}
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.utubtn is now active in filesystem cephfs as rank 0
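
The fsmap transitions logged here (up:boot standby, assignment to rank 0, up:creating, then up:active) can also be watched from outside the mon log by dumping the fsmap. A hedged sketch using the same librados binding as above; "fs dump" is a standard mon command, and the raw JSON is printed rather than assuming its field layout:

    import json
    import rados

    # Dump the fsmap whose epochs the mon is printing above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "fs dump", "format": "json"}), b"")
    print(json.dumps(json.loads(outbuf), indent=2))
    cluster.shutdown()
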
Nov 23 15:42:42 np0005532761 systemd[1]: Starting Ceph mds.cephfs.compute-0.jcbopz for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:42 np0005532761 podman[96438]: 2025-11-23 20:42:42.679550504 +0000 UTC m=+0.036331345 container create 8ff2bda8b53c3fcd4aff9042fce74127261986022bdb388655ab87fcd8af31a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-0-jcbopz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:42:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcdd5430aa692752218f8dcb396bc8f1a18da056c1536451ae939f8aff69c8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcdd5430aa692752218f8dcb396bc8f1a18da056c1536451ae939f8aff69c8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcdd5430aa692752218f8dcb396bc8f1a18da056c1536451ae939f8aff69c8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcdd5430aa692752218f8dcb396bc8f1a18da056c1536451ae939f8aff69c8e/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jcbopz supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:42 np0005532761 podman[96438]: 2025-11-23 20:42:42.737016232 +0000 UTC m=+0.093797113 container init 8ff2bda8b53c3fcd4aff9042fce74127261986022bdb388655ab87fcd8af31a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-0-jcbopz, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:42 np0005532761 podman[96438]: 2025-11-23 20:42:42.744505418 +0000 UTC m=+0.101286269 container start 8ff2bda8b53c3fcd4aff9042fce74127261986022bdb388655ab87fcd8af31a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-0-jcbopz, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:42:42 np0005532761 bash[96438]: 8ff2bda8b53c3fcd4aff9042fce74127261986022bdb388655ab87fcd8af31a7
Nov 23 15:42:42 np0005532761 podman[96438]: 2025-11-23 20:42:42.66494643 +0000 UTC m=+0.021727301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:42 np0005532761 systemd[1]: Started Ceph mds.cephfs.compute-0.jcbopz for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:42 np0005532761 ceph-mds[96457]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:42 np0005532761 ceph-mds[96457]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 23 15:42:42 np0005532761 ceph-mds[96457]: main not setting numa affinity
Nov 23 15:42:42 np0005532761 ceph-mds[96457]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-0-jcbopz[96453]: starting mds.cephfs.compute-0.jcbopz at 
Nov 23 15:42:42 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Updating MDS map to version 4 from mon.0
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.gmfhnm on compute-1
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.gmfhnm on compute-1
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v28: 136 pgs: 1 unknown, 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:42:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:42:43 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:42:43 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:42:43 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 12 completed events
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: daemon mds.cephfs.compute-2.utubtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: daemon mds.cephfs.compute-2.utubtn is now active in filesystem cephfs as rank 0
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e5 new map
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e5 print_map
    e5
    btime 2025-11-23T20:42:43.300630+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:43.300628+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:43 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Updating MDS map to version 5 from mon.0
Nov 23 15:42:43 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Monitors have assigned me to become a standby
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:active
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] up:boot
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 1 up:standby
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"} v 0)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e5 all = 0
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e6 new map
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e6 print_map
    e6
    btime 2025-11-23T20:42:43.320643+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:43.300628+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 1 up:standby
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: Deploying daemon mds.cephfs.compute-1.gmfhnm on compute-1
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev d93e0f15-a797-43f4-aedc-56cd3387333c (Updating mds.cephfs deployment (+3 -> 3))
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event d93e0f15-a797-43f4-aedc-56cd3387333c (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev c7e1cad9-b30a-4887-b93f-2e90460faee8 (Updating nfs.cephfs deployment (+3 -> 3))
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:44 np0005532761 radosgw[95430]: v1 topic migration: starting v1 topic migration..
Nov 23 15:42:44 np0005532761 radosgw[95430]: LDAP not started since no server URIs were provided in the configuration.
Nov 23 15:42:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-0-lntkpb[95426]: 2025-11-23T20:42:44.569+0000 7f1cd5ae5980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:44 np0005532761 radosgw[95430]: v1 topic migration: finished v1 topic migration
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: framework: beast
Nov 23 15:42:44 np0005532761 radosgw[95430]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 23 15:42:44 np0005532761 radosgw[95430]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: starting handler: beast
Nov 23 15:42:44 np0005532761 radosgw[95430]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:44 np0005532761 radosgw[95430]: mgrc service_daemon_register rgw.14562 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.lntkpb,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=7b74c4d0-333d-4a78-943d-fd3c4abdfa87,zone_name=default,zonegroup_id=3560ca63-18fc-44aa-8d4c-f5d89c554a9f,zonegroup_name=default}
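
With the beast frontend up, the radosgw registers itself with the mgr (service_daemon_register) carrying the metadata blob above: zone and zonegroup, the endpoint on 192.168.122.100:8082, the container image digest, and host facts. Such registrations are what "ceph service dump" reports; a sketch that shells out to the CLI to list them, illustrative only and runnable from any host holding an admin keyring:

    import json
    import subprocess

    # List service-daemon registrations such as the rgw.14562 entry above.
    out = subprocess.run(
        ["ceph", "service", "dump", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=2))
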
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha-rgw
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha-rgw
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
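Each handle_command / audit pair above is a JSON mon command. For illustration, the `auth get-or-create` that cephadm just issued for the -rgw user could be replayed from Python; a sketch assuming python-rados and admin privileges, with the entity and caps copied verbatim from the audit line:

    import json
    import rados

    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw",
        # caps are passed as alternating [type, spec] pairs, as in the log
        "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"],
    })

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(cmd, b"")
        if ret != 0:
            raise RuntimeError(f"auth get-or-create failed: {errs}")
        print(out.decode())  # keyring block for the (possibly pre-existing) entity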
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.fuxuha's ganesha conf is defaulting to empty
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.fuxuha's ganesha conf is defaulting to empty
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v31: 136 pgs: 1 unknown, 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e7 new map
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e7 print_map
    e7
    btime 2025-11-23T20:42:45.339402+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:43.300628+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] up:boot
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"} v 0)
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"}]: dispatch
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e7 all = 0
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha-rgw
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Bind address in nfs.cephfs.0.0.compute-1.fuxuha's ganesha conf is defaulting to empty
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Deploying daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: Cluster is now healthy
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:46 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw
Nov 23 15:42:46 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:46 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 23 15:42:46 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
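The pattern repeating above for each ganesha daemon is: the mgr creates a short-lived client.mgr.nfs.grace.nfs.cephfs key, uses it to add the daemon to the grace database stored in the .nfs pool, then removes the key again (the later "auth rm ... finished" lines). A sketch that inspects the same grace DB; the ganesha-rados-grace flags and the namespace choice (the service id, "cephfs") are assumptions about the installed nfs-ganesha tooling:

    import subprocess

    # Dump the ganesha grace DB the mgr just updated; pool taken from the
    # caps in the audit line ("allow rwx pool .nfs").
    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        check=True,
    )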
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e8 new map
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e8 print_map
    e8
    btime 2025-11-23T20:42:46.698669+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  8
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:46.341150+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:active
Nov 23 15:42:46 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 15:42:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v32: 136 pgs: 136 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 8.8 KiB/s wr, 274 op/s
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e9 new map
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e9 print_map
    e9
    btime 2025-11-23T20:42:47.710992+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  8
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:46.341150+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:47 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Updating MDS map to version 9 from mon.0
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] up:standby
Nov 23 15:42:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 15:42:48 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 13 completed events
Nov 23 15:42:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:42:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:48 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 92530bcd-3c04-4cb8-a8ac-5e115a2c7d47 (Global Recovery Event) in 10 seconds
Nov 23 15:42:48 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v33: 136 pgs: 136 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 7.5 KiB/s wr, 233 op/s
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 new map
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 print_map
    e10
    btime 2025-11-23T20:42:49.046556+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  8
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-23T20:42:15.389822+0000
    modified  2025-11-23T20:42:46.341150+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24181}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24181 members: 24181
    [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] up:standby
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
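The fsmap has now settled at one active MDS (cephfs.compute-2.utubtn) plus two standbys, both pinned to the filesystem via join_fscid=1. A sketch fetching the same map programmatically, assuming python-rados ("fs dump" is the mon command behind `ceph fs dump`; the JSON key names are an assumption about the squid dump layout):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "fs dump", "format": "json"}), b"")
        if ret != 0:
            raise RuntimeError(errs)
        fsmap = json.loads(out)
        print("epoch:", fsmap.get("epoch"))
        print("standbys:", [s.get("name") for s in fsmap.get("standbys", [])])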
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw-rgw
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw-rgw
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.dqbktw's ganesha conf is defaulting to empty
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.dqbktw's ganesha conf is defaulting to empty
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.dqbktw on compute-2
Nov 23 15:42:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.dqbktw on compute-2
Nov 23 15:42:50 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:50 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:50 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:50 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.1 KiB/s wr, 284 op/s
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw-rgw
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: Bind address in nfs.cephfs.1.0.compute-2.dqbktw's ganesha conf is defaulting to empty
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: Deploying daemon nfs.cephfs.1.0.compute-2.dqbktw on compute-2
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:51 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy
Nov 23 15:42:51 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:51 np0005532761 ceph-mgr[74869]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 23 15:42:51 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:52 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 5.9 KiB/s wr, 237 op/s
Nov 23 15:42:53 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 14 completed events
Nov 23 15:42:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:42:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:53 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy
Nov 23 15:42:53 np0005532761 ceph-mon[74569]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 23 15:42:53 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.4 KiB/s wr, 215 op/s
Nov 23 15:42:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 23 15:42:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy-rgw
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy-rgw
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.bfglcy's ganesha conf is defaulting to empty
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.bfglcy's ganesha conf is defaulting to empty
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.bfglcy on compute-0
Nov 23 15:42:55 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.bfglcy on compute-0
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:55 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.551989405 +0000 UTC m=+0.035306027 container create c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:42:55 np0005532761 systemd[1]: Started libpod-conmon-c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f.scope.
Nov 23 15:42:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.536861169 +0000 UTC m=+0.020177811 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.637433968 +0000 UTC m=+0.120750610 container init c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.643490197 +0000 UTC m=+0.126806819 container start c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 15:42:55 np0005532761 jolly_shamir[96729]: 167 167
Nov 23 15:42:55 np0005532761 systemd[1]: libpod-c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f.scope: Deactivated successfully.
Nov 23 15:42:55 np0005532761 conmon[96729]: conmon c4f5ac5843a44826dbe6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f.scope/container/memory.events
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.66721803 +0000 UTC m=+0.150534652 container attach c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:55 np0005532761 podman[96713]: 2025-11-23 20:42:55.667492247 +0000 UTC m=+0.150808879 container died c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:42:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fa62f55a146040da83e1a38bd4fbdf6564dc1933f1dceb341fc0a2b1e6c76301-merged.mount: Deactivated successfully.
Nov 23 15:42:56 np0005532761 ceph-mon[74569]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:56 np0005532761 ceph-mon[74569]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy-rgw
Nov 23 15:42:56 np0005532761 ceph-mon[74569]: Bind address in nfs.cephfs.2.0.compute-0.bfglcy's ganesha conf is defaulting to empty
Nov 23 15:42:56 np0005532761 ceph-mon[74569]: Deploying daemon nfs.cephfs.2.0.compute-0.bfglcy on compute-0
Nov 23 15:42:56 np0005532761 podman[96713]: 2025-11-23 20:42:56.180740247 +0000 UTC m=+0.664056869 container remove c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:42:56 np0005532761 systemd[1]: libpod-conmon-c4f5ac5843a44826dbe617cce3f93982dfb87b10241928e944f912063a7d944f.scope: Deactivated successfully.
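The short-lived jolly_shamir container above (create, start, one line of output "167 167", died, remove, all within a second) is consistent with cephadm probing the image for the ceph uid:gid before deploying the NFS daemon; 167:167 is ceph:ceph, matching the RGW "set uid:gid" line earlier. A reproduction sketch where the stat invocation is an assumption and only the image digest is copied from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot container that prints the owner of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    uid, gid = out  # expected: "167 167" (ceph:ceph)
    print(uid, gid)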
Nov 23 15:42:56 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:56 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:56 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:56 np0005532761 systemd[1]: Reloading.
Nov 23 15:42:56 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:56 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:56 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.6 KiB/s wr, 192 op/s
Nov 23 15:42:56 np0005532761 podman[96872]: 2025-11-23 20:42:56.994928816 +0000 UTC m=+0.034619169 container create 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e964154f11149fdb6e3d9d01fa9ea9fef089c0a8b3facb87f2a861a4c64117/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e964154f11149fdb6e3d9d01fa9ea9fef089c0a8b3facb87f2a861a4c64117/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e964154f11149fdb6e3d9d01fa9ea9fef089c0a8b3facb87f2a861a4c64117/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e964154f11149fdb6e3d9d01fa9ea9fef089c0a8b3facb87f2a861a4c64117/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:57 np0005532761 podman[96872]: 2025-11-23 20:42:57.050020052 +0000 UTC m=+0.089710405 container init 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:57 np0005532761 podman[96872]: 2025-11-23 20:42:57.054689325 +0000 UTC m=+0.094379658 container start 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:42:57 np0005532761 bash[96872]: 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8
Nov 23 15:42:57 np0005532761 podman[96872]: 2025-11-23 20:42:56.979421059 +0000 UTC m=+0.019111412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:57 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:57 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev c7e1cad9-b30a-4887-b93f-2e90460faee8 (Updating nfs.cephfs deployment (+3 -> 3))
Nov 23 15:42:57 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event c7e1cad9-b30a-4887-b93f-2e90460faee8 (Updating nfs.cephfs deployment (+3 -> 3)) in 13 seconds
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:57 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 648f4046-5e90-4630-9dbf-9e0d21541f4b (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Nov 23 15:42:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:57 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.iwomei on compute-1
Nov 23 15:42:57 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.iwomei on compute-1
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
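[editor's note] The grace-period sequence above is driven by ganesha's "rados_cluster" recovery backend: each node persists NFS client reclaim records as RADOS objects (e.g. the rec-0000000000000003:nfs.cephfs.2 object named two lines up), and ret=-2 is simply -ENOENT — this is the daemon's first start, no recovery objects exist yet, so grace is lifted with a clid count of 0. A minimal sketch of inspecting those objects with the python3-rados binding; the pool and namespace are assumptions taken from the usual cephadm layout (the authoritative values are in the RADOS_KV block of ganesha.conf):

    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumes client.admin access
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('.nfs')      # pool used by the mgr nfs module
        ioctx.set_namespace('cephfs')           # assumed namespace; see RADOS_KV in ganesha.conf
        for obj in ioctx.list_objects():        # rec-<epoch>:<nodeid> objects hold reclaimable client ids
            print(obj.key)
        ioctx.close()
    finally:
        cluster.shutdown()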
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
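[editor's note] The DBUS :CRIT lines are expected inside a cephadm container: ganesha exposes its admin and export-management interface over the D-Bus system bus, but the container runs no dbus-daemon, so /run/dbus/system_bus_socket is absent and every object-path registration is skipped (the dbus service thread exits a moment later for the same reason). A trivial, purely illustrative probe for the socket the daemon tried to open:

    import os
    import socket

    path = '/run/dbus/system_bus_socket'
    if not os.path.exists(path):
        print(path, 'missing: ganesha D-Bus admin interface is unavailable')
    else:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)  # succeeds on a host running a system bus
            print('system bus reachable')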
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:42:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:42:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:42:58 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 15 completed events
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532761 ceph-mon[74569]: Deploying daemon haproxy.nfs.cephfs.compute-1.iwomei on compute-1
Nov 23 15:42:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 63 op/s
Nov 23 15:42:59 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 67 op/s
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:01 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.uvukit on compute-0
Nov 23 15:43:01 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.uvukit on compute-0
Nov 23 15:43:02 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:02 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:02 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:02 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe84000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
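[editor's note] This svc_vc_recv event, which repeats every few seconds below, is the ganesha side of the ingress service: haproxy fronts the NFS port speaking the PROXY protocol, and its periodic TCP health checks open and close connections without delivering a complete PROXY v2 header, so ntirpc marks the transport dead and drops it. (The literal "%" is a broken format specifier in the library's log template, not a value from this system.) For reference, a sketch of the 28-byte PROXY v2 preamble a real TCP/IPv4 client connection would carry; the addresses and the 2049 frontend port are illustrative assumptions:

    import socket
    import struct

    SIG = b'\r\n\r\n\x00\r\nQUIT\n'  # 12-byte PROXY v2 magic

    def proxy_v2_header(src, sport, dst, dport):
        # 0x21 = version 2 / PROXY command, 0x11 = TCP over IPv4
        addrs = (socket.inet_aton(src) + socket.inet_aton(dst)
                 + struct.pack('!HH', sport, dport))
        return SIG + bytes([0x21, 0x11]) + struct.pack('!H', len(addrs)) + addrs

    hdr = proxy_v2_header('192.168.122.2', 40000, '192.168.122.100', 2049)
    assert len(hdr) == 28  # 16-byte fixed header + 12-byte IPv4 address block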
Nov 23 15:43:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 15:43:03 np0005532761 ceph-mon[74569]: Deploying daemon haproxy.nfs.cephfs.compute-0.uvukit on compute-0
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.28779232 +0000 UTC m=+2.208367821 container create ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.274353756 +0000 UTC m=+2.194929277 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:04 np0005532761 systemd[1]: Started libpod-conmon-ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3.scope.
Nov 23 15:43:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.359501771 +0000 UTC m=+2.280077312 container init ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.366533096 +0000 UTC m=+2.287108637 container start ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.36975085 +0000 UTC m=+2.290326381 container attach ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 frosty_feistel[97149]: 0 0
Nov 23 15:43:04 np0005532761 systemd[1]: libpod-ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3.scope: Deactivated successfully.
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.372129053 +0000 UTC m=+2.292704554 container died ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-84cfc3b3b98727684759086aabfada98a4cbaf5c5d025bed929bbe1141c0c917-merged.mount: Deactivated successfully.
Nov 23 15:43:04 np0005532761 podman[97032]: 2025-11-23 20:43:04.410745516 +0000 UTC m=+2.331321017 container remove ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3 (image=quay.io/ceph/haproxy:2.3, name=frosty_feistel)
Nov 23 15:43:04 np0005532761 systemd[1]: libpod-conmon-ccde52b68e73e202dfbef45219c0c90c5c1230735c06892f8f99fec13a0030c3.scope: Deactivated successfully.
Nov 23 15:43:04 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:04 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:04 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:04 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:04 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:04 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:04 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 15:43:05 np0005532761 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.uvukit for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:05 np0005532761 podman[97297]: 2025-11-23 20:43:05.197584017 +0000 UTC m=+0.036079138 container create cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:43:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7f3b174aefff5ca7934e96d8074eca0a63e7d1b84c2ff3e4309cce4e5340b3/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:05 np0005532761 podman[97297]: 2025-11-23 20:43:05.241923751 +0000 UTC m=+0.080418882 container init cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:43:05 np0005532761 podman[97297]: 2025-11-23 20:43:05.246897442 +0000 UTC m=+0.085392553 container start cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:43:05 np0005532761 bash[97297]: cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9
Nov 23 15:43:05 np0005532761 podman[97297]: 2025-11-23 20:43:05.182562093 +0000 UTC m=+0.021057224 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:05 np0005532761 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.uvukit for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [NOTICE] 326/204305 (2) : New worker #1 (4) forked
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:05 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.dxqoem on compute-2
Nov 23 15:43:05 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.dxqoem on compute-2
Nov 23 15:43:06 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:06 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:06 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c001cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 15:43:07 np0005532761 ceph-mon[74569]: Deploying daemon haproxy.nfs.cephfs.compute-2.dxqoem on compute-2
Nov 23 15:43:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:08 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe84001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:08 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 23 15:43:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Nov 23 15:43:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
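[editor's note] With the haproxy daemons deployed and keepalived about to follow, the ingress spec places the virtual IP 192.168.122.2 on br-ex of all three hosts; keepalived holds the VIP on one node at a time while haproxy balances across the ganesha backends. Once the VIP is up, a reachability check is just a TCP connect; 2049 is the standard NFS port and an assumption about the frontend here:

    import socket

    # VIP taken from the ingress log lines above; frontend port assumed to be 2049/tcp (NFS)
    with socket.create_connection(('192.168.122.2', 2049), timeout=5) as s:
        print('NFS ingress reachable from', s.getsockname())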
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.lwmzxc on compute-1
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.lwmzxc on compute-1
Nov 23 15:43:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:10 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:10 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0027d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 23 15:43:11 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:11 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe84001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: Deploying daemon keepalived.nfs.cephfs.compute-1.lwmzxc on compute-1
Nov 23 15:43:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:12 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:43:12
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', '.nfs', '.mgr', 'images', 'volumes', 'vms', 'cephfs.cephfs.data']
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:43:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:12 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
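[editor's note] Each pool gets a fractional PG target from its share of raw capacity scaled by a per-pool bias (note the 4.0 bias on the metadata pools), and the result is quantized to a power of two before being compared against the current pg_num; for the near-empty pools above, floors built into the module dominate, which is why targets like 0.0006 still land on 16 or 32. The quantization step itself is just power-of-two rounding, sketched here in simplified form (an illustration, not the mgr module's exact code):

    import math

    def quantize_pgs(target, floor=1):
        """Round a (possibly fractional) PG target up to the nearest power of two."""
        n = max(math.ceil(target), floor)
        return 1 << math.ceil(math.log2(n))

    print(quantize_pgs(0.002, floor=16))  # -> 16, matching cephfs.cephfs.meta above
    print(quantize_pgs(20))               # -> 32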
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Nov 23 15:43:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:43:12 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 23 15:43:13 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev cfdea3d2-be52-4347-8a78-257fb8454de8 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:13 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0027d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 23 15:43:14 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 5e38e27b-e6e7-40e7-9aaf-9e82771b9ea1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:14 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe840089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:14 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v48: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Nov 23 15:43:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
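[editor's note] The handle_command/audit pairs here are the autoscaler acting on its plan: the mgr raises each pool's pg_num target and then steps pg_num_actual so the splits are applied gradually rather than all at once. Every one of these mon_command([...]) entries is a JSON command dispatched over librados; the same call is available from Python, sketched below with the read-only counterpart of the commands above (assumes client.admin credentials):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({'prefix': 'osd pool get',
                      'pool': 'cephfs.cephfs.meta',
                      'var': 'pg_num'})
    ret, out, errs = cluster.mon_command(cmd, b'')  # same path the mgr's commands take
    print(ret, out.decode().strip(), errs)
    cluster.shutdown()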
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 23 15:43:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:15 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 23 15:43:15 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 53 pg[6.0( v 49'39 (0'0,49'39] local-lis/les=18/19 n=22 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=53 pruub=11.071826935s) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 48'38 mlcod 48'38 active pruub 170.878845215s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:15 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 53 pg[6.0( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=53 pruub=11.071826935s) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 48'38 mlcod 0'0 unknown pruub 170.878845215s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 1fe4fa7d-b5cb-4ea5-9715-8dbd2696cf6e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.spcytb on compute-0
Nov 23 15:43:15 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.spcytb on compute-0
Nov 23 15:43:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:16 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 23 15:43:16 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev ac1d43b6-1604-489b-a3b3-eb25f8e1b57a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.c( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.b( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.8( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.a( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.9( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.e( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.f( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.2( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.5( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.3( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.4( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.6( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.7( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.1( v 49'39 (0'0,49'39] local-lis/les=18/19 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.d( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=18/19 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: Deploying daemon keepalived.nfs.cephfs.compute-0.spcytb on compute-0
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.c( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.9( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.0( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 48'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.3( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.5( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.4( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.7( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.d( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.1( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 54 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=18/18 les/c/f=19/19/0 sis=53) [1] r=0 lpr=53 pi=[18,53)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v51: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:43:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:16 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe840089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 23 15:43:17 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 23 15:43:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:17 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 23 15:43:17 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 2044327a-4079-49f7-8e42-182f1daa21d2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 55 pg[8.0( v 50'45 (0'0,50'45] local-lis/les=39/40 n=5 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=13.183573723s) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 50'44 mlcod 50'44 active pruub 175.750305176s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 55 pg[9.0( v 43'12 (0'0,43'12] local-lis/les=42/43 n=6 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=55 pruub=14.071855545s) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 43'11 mlcod 43'11 active pruub 176.638732910s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 55 pg[9.0( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=55 pruub=14.071855545s) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 43'11 mlcod 0'0 unknown pruub 176.638732910s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 55 pg[8.0( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=13.183573723s) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 50'44 mlcod 0'0 unknown pruub 175.750305176s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559a271718c0) operator()   moving buffer(0x559a26010ca8 space 0x559a25520830 0x0~1000 clean)
Nov 23 15:43:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:18 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 23 15:43:18 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 61954f3e-3a3f-458a-8fae-1dd8b2cf6453 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.19( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.18( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1e( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.16( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.17( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1f( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.16( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.17( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.10( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.3( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.11( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.2( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.4( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.7( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.5( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.6( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.13( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.12( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.12( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.13( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1d( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1c( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1c( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1d( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1f( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.18( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1e( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.19( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1b( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1a( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1a( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1b( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.5( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.4( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.7( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.6( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1( v 50'45 (0'0,50'45] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1( v 43'12 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.a( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.b( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.d( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.c( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.c( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.d( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.f( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.e( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.b( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.a( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.8( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.9( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.9( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.8( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.e( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.f( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.3( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.2( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.10( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.14( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.11( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.15( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.14( v 50'45 lc 0'0 (0'0,50'45] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.19( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.15( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=42/43 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1e( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.17( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.16( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.18( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.16( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.17( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.3( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.10( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.4( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.2( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.11( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.7( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.13( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.12( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.6( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.12( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.13( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1c( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1d( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1f( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.18( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.19( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1b( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.5( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1e( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1a( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1a( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.4( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.7( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.5( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.0( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 50'44 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.6( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.0( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 43'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.a( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.1( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.c( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.f( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.d( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.b( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.e( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.a( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.9( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.8( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.8( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.3( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.e( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.2( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.1f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.10( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.9( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.14( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.15( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.11( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[9.15( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=42/42 les/c/f=43/43/0 sis=55) [1] r=0 lpr=55 pi=[42,55)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 56 pg[8.14( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.906123992 +0000 UTC m=+2.628733563 container create eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9)
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.886863886 +0000 UTC m=+2.609473497 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v54: 244 pgs: 108 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:18 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:18 np0005532761 systemd[1]: Started libpod-conmon-eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c.scope.
Nov 23 15:43:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.979695073 +0000 UTC m=+2.702304634 container init eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, build-date=2023-02-22T09:23:20)
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.985691131 +0000 UTC m=+2.708300692 container start eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=1793)
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.98873956 +0000 UTC m=+2.711349171 container attach eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=)
Nov 23 15:43:18 np0005532761 jolly_mclaren[97518]: 0 0
Nov 23 15:43:18 np0005532761 systemd[1]: libpod-eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c.scope: Deactivated successfully.
Nov 23 15:43:18 np0005532761 podman[97422]: 2025-11-23 20:43:18.990493176 +0000 UTC m=+2.713102747 container died eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64)
Nov 23 15:43:19 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0a4e3b45baca0cfc5b44fc735671ec4c46d2819c59edc7b046a1d92fe7726713-merged.mount: Deactivated successfully.
Nov 23 15:43:19 np0005532761 podman[97422]: 2025-11-23 20:43:19.033907186 +0000 UTC m=+2.756516767 container remove eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c (image=quay.io/ceph/keepalived:2.2.4, name=jolly_mclaren, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Nov 23 15:43:19 np0005532761 systemd[1]: libpod-conmon-eb133327abbfdeada3bcc5ce261ae00cf998f6c276f78cfde85eb4c1e19a1e3c.scope: Deactivated successfully.
Nov 23 15:43:19 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:19 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:19 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:19 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:19 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 23 15:43:19 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 23 15:43:19 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:19 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:19 np0005532761 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.spcytb for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:19 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe840096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 23 15:43:19 np0005532761 podman[97666]: 2025-11-23 20:43:19.858250971 +0000 UTC m=+0.031196480 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:20 np0005532761 podman[97666]: 2025-11-23 20:43:20.113444889 +0000 UTC m=+0.286390368 container create 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=)
Nov 23 15:43:20 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f087a33fa995f5f3fa524e8a28d753e039e48a991a776d0365bdc0016de398/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:20 np0005532761 podman[97666]: 2025-11-23 20:43:20.301027742 +0000 UTC m=+0.473973291 container init 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Nov 23 15:43:20 np0005532761 podman[97666]: 2025-11-23 20:43:20.307980215 +0000 UTC m=+0.480925714 container start 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, version=2.2.4)
Nov 23 15:43:20 np0005532761 bash[97666]: 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7
Nov 23 15:43:20 np0005532761 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.spcytb for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Starting VRRP child process, pid=4
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: Startup complete
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: (VI_0) Entering BACKUP STATE (init)
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:20 2025: VRRP_Script(check_backend) succeeded
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 6f99838c-683b-4912-8fa4-6506cc54efd0 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev cfdea3d2-be52-4347-8a78-257fb8454de8 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event cfdea3d2-be52-4347-8a78-257fb8454de8 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 7 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 5e38e27b-e6e7-40e7-9aaf-9e82771b9ea1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 5e38e27b-e6e7-40e7-9aaf-9e82771b9ea1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 6 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 1fe4fa7d-b5cb-4ea5-9715-8dbd2696cf6e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 1fe4fa7d-b5cb-4ea5-9715-8dbd2696cf6e (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev ac1d43b6-1604-489b-a3b3-eb25f8e1b57a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event ac1d43b6-1604-489b-a3b3-eb25f8e1b57a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 2044327a-4079-49f7-8e42-182f1daa21d2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 2044327a-4079-49f7-8e42-182f1daa21d2 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 3 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 61954f3e-3a3f-458a-8fae-1dd8b2cf6453 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 61954f3e-3a3f-458a-8fae-1dd8b2cf6453 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 2 seconds
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 6f99838c-683b-4912-8fa4-6506cc54efd0 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 6f99838c-683b-4912-8fa4-6506cc54efd0 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Nov 23 15:43:20 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 23 15:43:20 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:20 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v56: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:20 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.cpybdt on compute-2
Nov 23 15:43:21 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.cpybdt on compute-2
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 57 pg[11.0( v 47'48 (0'0,47'48] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.846013069s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 47'47 mlcod 47'47 active pruub 181.283203125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.0( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.846013069s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 47'47 mlcod 0'0 unknown pruub 181.283203125s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.2( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.3( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.6( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.7( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.4( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.5( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.8( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.a( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.9( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.b( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.c( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.d( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.e( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.10( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.f( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.11( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.12( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.13( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.14( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.15( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.16( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.17( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.18( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.19( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1a( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1b( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1c( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1d( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1e( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 58 pg[11.1f( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 23 15:43:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:21 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: Deploying daemon keepalived.nfs.cephfs.compute-2.cpybdt on compute-2
Nov 23 15:43:21 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 23 15:43:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 23 15:43:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.17( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.16( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.13( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.0( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 47'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.b( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.d( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.9( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.c( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.8( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.2( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.f( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.e( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.3( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.4( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.18( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1d( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.7( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1e( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.19( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1f( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.10( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.11( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.6( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.5( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.12( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.15( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1c( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.14( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 59 pg[11.1b( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 23 15:43:22 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 23 15:43:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:22 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe840096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v59: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:43:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:22 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:23 np0005532761 python3[97716]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:43:23 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 22 completed events
Nov 23 15:43:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.164440442 +0000 UTC m=+0.045493894 container create d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:43:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:23 np0005532761 systemd[1]: Started libpod-conmon-d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac.scope.
Nov 23 15:43:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2659f859f7f9e4b644633c30984e987ef215c187e937fd55f4ac33e3193b25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2659f859f7f9e4b644633c30984e987ef215c187e937fd55f4ac33e3193b25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.144711965 +0000 UTC m=+0.025765447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.251038335 +0000 UTC m=+0.132091807 container init d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.257099195 +0000 UTC m=+0.138152647 container start d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.260849923 +0000 UTC m=+0.141903375 container attach d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:43:23 np0005532761 interesting_mcclintock[97732]: could not fetch user info: no user info saved
Nov 23 15:43:23 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 23 15:43:23 np0005532761 systemd[1]: libpod-d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac.scope: Deactivated successfully.
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.479190384 +0000 UTC m=+0.360243836 container died d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:43:23 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 23 15:43:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ab2659f859f7f9e4b644633c30984e987ef215c187e937fd55f4ac33e3193b25-merged.mount: Deactivated successfully.
Nov 23 15:43:23 np0005532761 podman[97717]: 2025-11-23 20:43:23.513189517 +0000 UTC m=+0.394242969 container remove d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:43:23 np0005532761 systemd[1]: libpod-conmon-d891a6b0ddbf0fd5f8ff867c500c0e03a646c009f04ef91d0fd9b0245c4e9bac.scope: Deactivated successfully.
Nov 23 15:43:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:23 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:23 np0005532761 python3[97856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:43:23 np0005532761 podman[97857]: 2025-11-23 20:43:23.903666934 +0000 UTC m=+0.054975374 container create 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:43:23 np0005532761 systemd[1]: Started libpod-conmon-6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855.scope.
Nov 23 15:43:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:23 2025: (VI_0) Entering MASTER STATE
Nov 23 15:43:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c905a6359c09a57c238c6b416c97128b1933c8ece986b8e2cefae18fff3e0f75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c905a6359c09a57c238c6b416c97128b1933c8ece986b8e2cefae18fff3e0f75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:23 np0005532761 podman[97857]: 2025-11-23 20:43:23.971471944 +0000 UTC m=+0.122780404 container init 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Nov 23 15:43:23 np0005532761 podman[97857]: 2025-11-23 20:43:23.880889496 +0000 UTC m=+0.032197986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:43:23 np0005532761 podman[97857]: 2025-11-23 20:43:23.976943248 +0000 UTC m=+0.128251688 container start 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:43:23 np0005532761 podman[97857]: 2025-11-23 20:43:23.979719351 +0000 UTC m=+0.131027791 container attach 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:43:24 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]: {
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "user_id": "openstack",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "display_name": "openstack",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "email": "",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "suspended": 0,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "max_buckets": 1000,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "subusers": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "keys": [
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        {
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:            "user": "openstack",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:            "access_key": "UV8B2RO7TZRMSBC5XPIA",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:            "secret_key": "g7QyK2HSgYwBSRAxRtwSHPos1fRizL1DF2pwAbus",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:            "active": true,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:            "create_date": "2025-11-23T20:43:24.147870Z"
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        }
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    ],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "swift_keys": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "caps": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "op_mask": "read, write, delete",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "default_placement": "",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "default_storage_class": "",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "placement_tags": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "bucket_quota": {
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "enabled": false,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "check_on_raw": false,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_size": -1,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_size_kb": 0,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_objects": -1
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    },
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "user_quota": {
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "enabled": false,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "check_on_raw": false,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_size": -1,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_size_kb": 0,
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:        "max_objects": -1
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    },
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "temp_url_keys": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "type": "rgw",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "mfa_ids": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "account_id": "",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "path": "/",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "create_date": "2025-11-23T20:43:24.147158Z",
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "tags": [],
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]:    "group_ids": []
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]: }
Nov 23 15:43:24 np0005532761 nostalgic_shockley[97872]: 
Nov 23 15:43:24 np0005532761 systemd[1]: libpod-6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855.scope: Deactivated successfully.
Nov 23 15:43:24 np0005532761 podman[97857]: 2025-11-23 20:43:24.211560855 +0000 UTC m=+0.362869295 container died 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:43:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c905a6359c09a57c238c6b416c97128b1933c8ece986b8e2cefae18fff3e0f75-merged.mount: Deactivated successfully.
Nov 23 15:43:24 np0005532761 podman[97857]: 2025-11-23 20:43:24.246506112 +0000 UTC m=+0.397814552 container remove 6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855 (image=quay.io/ceph/ceph:v19, name=nostalgic_shockley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:43:24 np0005532761 systemd[1]: libpod-conmon-6b8dca8c77c72a7a538123280863af43150f296fb36e0dba45f22a5e9be20855.scope: Deactivated successfully.
Nov 23 15:43:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:24 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 23 15:43:24 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 23 15:43:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:24 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:24 np0005532761 python3[97995]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:43:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:43:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:24 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe840096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:25 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:47006] [GET] [200] [0.110s] [6.3K] [eb36e9c1-92f4-4adc-a932-35d666c701d2] /
Nov 23 15:43:25 np0005532761 python3[98019]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:43:25 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:47008] [GET] [200] [0.002s] [6.3K] [88cb9229-b335-4a49-a7fc-12bfe6164a81] /
Nov 23 15:43:25 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Nov 23 15:43:25 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Nov 23 15:43:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:25 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 648f4046-5e90-4630-9dbf-9e0d21541f4b (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 648f4046-5e90-4630-9dbf-9e0d21541f4b (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 29 seconds
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 79059a31-cde3-4aa3-9ba9-45217836dae0 (Updating alertmanager deployment (+1 -> 1))
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Nov 23 15:43:26 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 23 15:43:26 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 23 15:43:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:26 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 3 op/s
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:26 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 23 15:43:27 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.17( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008364677s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.456634521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.16( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.011703491s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460052490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.14( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.432256699s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880599976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.16( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.011672020s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460052490s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.17( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008251190s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.456634521s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.14( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.432208061s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880599976s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.13( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.011292458s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460113525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.13( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.011278152s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460113525s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.15( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431838989s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880569458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.15( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431756020s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.880599976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.10( v 58'48 (0'0,58'48] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431534767s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=58'46 lcod 58'47 mlcod 58'47 active pruub 186.880447388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.15( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431677818s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880569458s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.15( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431679726s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880599976s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.10( v 58'48 (0'0,58'48] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431506157s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=58'46 lcod 58'47 mlcod 0'0 unknown NOTIFY pruub 186.880447388s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.11( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431618690s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.880584717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.11( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431546211s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880584717s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.3( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431227684s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880355835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.d( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.392181396s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.841369629s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.3( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431201935s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880355835s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.1( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.392157555s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.841369629s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.d( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.392154694s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.841369629s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.1( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.392139435s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.841369629s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431088448s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880371094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431026459s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880371094s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.e( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.431003571s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.880371094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.e( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430974007s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880371094s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.8( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430688858s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880218506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.010549545s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460144043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.8( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430657387s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880218506s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.010490417s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460144043s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.9( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430445671s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.880096436s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.9( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430341721s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880096436s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.8( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430384636s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.880218506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.8( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430369377s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880218506s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.7( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.391469002s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.841354370s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.9( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430580139s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880477905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.a( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430163383s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880081177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.9( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430551529s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880477905s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.a( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.430150032s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880081177s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.7( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.391441345s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.841354370s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.f( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429633141s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.879867554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.f( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429601669s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.879867554s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.e( v 59'57 (0'0,59'57] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.010163307s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 59'56 active pruub 182.460525513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.b( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429504395s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.879852295s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.3( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.390499115s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840911865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.e( v 59'57 (0'0,59'57] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.010117531s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 0'0 unknown NOTIFY pruub 182.460525513s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.b( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429427147s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.879852295s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.3( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.390465736s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840911865s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.f( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009927750s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460510254s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.d( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429549217s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.879837036s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.f( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009905815s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460510254s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.d( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.429221153s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.879837036s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424657822s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.875396729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424571991s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.875320435s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424548149s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875320435s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.8( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009648323s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460433960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.8( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009584427s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460433960s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.5( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.390031815s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840927124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424633026s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875396729s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.5( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.390011787s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840927124s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.428950310s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.879928589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.a( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424255371s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.875274658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=53/54 n=3 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.389858246s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840927124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.428887367s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.879928589s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.a( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.424233437s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875274658s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=53/54 n=3 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.389839172s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840927124s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.3( v 59'57 (0'0,59'57] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009260178s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 59'56 active pruub 182.460525513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.9( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.389457703s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840759277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.6( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423819542s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.875183105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.9( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.389416695s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.3( v 59'57 (0'0,59'57] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009202003s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 0'0 unknown NOTIFY pruub 182.460525513s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.6( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423766136s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875183105s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.4( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009074211s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460601807s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.4( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009049416s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460601807s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.7( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.009011269s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460647583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423453331s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.875198364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.7( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008911133s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460647583s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.19( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008943558s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460708618s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.4( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423303604s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.875091553s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1b( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423436165s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875198364s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.19( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008918762s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460708618s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.5( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423320770s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.875183105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.5( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423298836s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875183105s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.4( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.423275948s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.875091553s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1d( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008493423s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460678101s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.18( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422652245s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874862671s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.19( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422652245s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874877930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1d( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008450508s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460678101s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008350372s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460617065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.19( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422625542s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874877930s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.18( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422620773s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874862671s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1a( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008327484s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460617065s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1e( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008374214s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460693359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1e( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.008330345s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460693359s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421967506s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874572754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.12( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421843529s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874496460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.1d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422186852s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874832153s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.12( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421809196s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874496460s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.1d( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.422095299s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874832153s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1c( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421916962s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874572754s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.12( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421743393s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874542236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.12( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421719551s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874542236s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.7( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421438217s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874450684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.6( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421439171s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874481201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.13( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421385765s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874481201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.6( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421419144s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874481201s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.7( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421390533s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874450684s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.13( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421368599s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874481201s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.5( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007577896s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460845947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.380643845s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.833923340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.5( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421580315s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874923706s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.380583763s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.833923340s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.3( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420958519s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874328613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.5( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007527351s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460845947s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.2( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421027184s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874420166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.3( v 43'12 (0'0,43'12] local-lis/les=55/56 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420941353s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874328613s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.5( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421556473s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874923706s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.2( v 50'45 (0'0,50'45] local-lis/les=55/56 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.421010017s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874420166s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007301331s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460845947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.12( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007212639s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460861206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.10( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420700073s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874343872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007198334s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460845947s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.16( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420584679s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.11( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420756340s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874435425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.16( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420563698s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874267578s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.11( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420716286s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874435425s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.12( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.007189751s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460861206s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.17( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420495987s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874328613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.10( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420676231s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874343872s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.17( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420467377s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874328613s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.17( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420374870s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874328613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.17( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420266151s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874328613s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.18( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420031548s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.874191284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1b( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006723404s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460891724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.18( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.420008659s) [0] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874191284s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1b( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006691933s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460891724s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.16( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.419994354s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active pruub 186.874252319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1c( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006622314s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 182.460906982s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[9.16( v 43'12 (0'0,43'12] local-lis/les=55/56 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.419965744s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.874252319s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.1c( v 47'48 (0'0,47'48] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006604195s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.460906982s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.426088333s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active pruub 186.880416870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[8.1f( v 50'45 (0'0,50'45] local-lis/les=55/56 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60 pruub=15.426056862s) [2] r=-1 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.880416870s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.14( v 59'57 (0'0,59'57] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006424904s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 59'56 active pruub 182.460906982s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[11.14( v 59'57 (0'0,59'57] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=11.006389618s) [0] r=-1 lpr=60 pi=[57,60)/1 crt=59'57 lcod 59'56 mlcod 0'0 unknown NOTIFY pruub 182.460906982s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.10( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.18( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.1b( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.12( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.f( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.1e( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.2( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.3( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.8( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.6( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.e( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.9( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.b( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.8( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.10( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.13( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[12.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 60 pg[7.4( empty local-lis/les=0/0 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
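The burst above is osd.1 digesting osdmap epoch 60: every PG whose up/acting set moved off this OSD restarts its peering interval (role 0 -> -1) and parks the local copy as Stray, while the PGs it now holds alone transition to Primary. A minimal sketch for tallying these transitions out of a journal dump (Python; the input file name osd.log and the parsing are illustrative, not part of any Ceph tooling):

    #!/usr/bin/env python3
    # Sketch: tally the peering transitions logged above from a journal dump.
    # The file name "osd.log" and the regexes are illustrative only.
    import re
    from collections import Counter

    role_re = re.compile(r"role (-?\d+) -> (-?\d+)")
    state_re = re.compile(r"transitioning to (\w+)")

    roles, targets = Counter(), Counter()
    with open("osd.log") as fh:
        for line in fh:
            if "start_peering_interval" in line:
                m = role_re.search(line)
                if m:
                    roles[m.groups()] += 1   # ('0', '-1') = lost its seat in the acting set
            elif "state<Start>: transitioning to" in line:
                targets[state_re.search(line).group(1)] += 1

    print("role changes:", dict(roles))
    print("Start -> state:", dict(targets))  # e.g. counts for 'Stray' vs 'Primary'

On this host the split is clean: PGs in pools 6, 8, 9 and 11 move away and go Stray, while pools 7 and 12 come up with osd.1 as sole primary.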
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 23 15:43:27 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.14 scrub ok
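Interleaved with the peering, osd.1's scheduled scrubs keep running (9.14 here; 11.0 and a 9.2 deep-scrub appear further down, all clean). The same checks can be requested on demand; a sketch that triggers them and reads one PG's scrub stamp back (PG ids taken from this log; illustrative only, needs admin access):

    #!/usr/bin/env python3
    # Sketch: request the scrubs seen in this log on demand, then read one
    # PG's scrub stamp back via "ceph pg <pgid> query" (JSON output).
    import json
    import subprocess

    for pgid in ("9.14", "11.0"):
        subprocess.run(["ceph", "pg", "scrub", pgid], check=True)
    subprocess.run(["ceph", "pg", "deep-scrub", "9.2"], check=True)

    out = subprocess.run(["ceph", "pg", "9.14", "query"],
                         check=True, capture_output=True, text=True).stdout
    print(json.loads(out)["info"]["stats"]["last_scrub_stamp"])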
Nov 23 15:43:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:27 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe8400a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
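The seven "finished" entries above are one step of the mgr ramping pgp_num_actual toward each pool's pg_num. The ramp is deliberately incremental so the misplaced-object ratio stays bounded (governed by the mgr's target_max_misplaced_ratio option), and each accepted step commits a new osdmap epoch (e61 and e62 below), which is what re-triggers the surrounding peering bursts. The audited mon_command maps to the plain CLI form "ceph osd pool set <pool> pgp_num_actual <n>"; a sketch that replays two of the later bumps and reads the pools back (names and values from the audit lines; needs a reachable cluster and admin keyring, illustrative only):

    #!/usr/bin/env python3
    # Sketch: replay two of the pgp_num_actual bumps audited in this log
    # and read the pools back. Purely illustrative.
    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    for pool, val in {"cephfs.cephfs.meta": 3, "default.rgw.log": 3}.items():
        subprocess.run(["ceph", "osd", "pool", "set", pool,
                        "pgp_num_actual", str(val)], check=True)
        print(pool, ceph_json("osd", "pool", "get", pool, "pgp_num"))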
Nov 23 15:43:28 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 23 completed events
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:28 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 88855d57-ffce-4d80-9de2-ff9408a45bdd (Global Recovery Event) in 10 seconds
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.459980177 +0000 UTC m=+1.544829686 volume create 92e03a4b1552ed8b453eecf42fa7c352798de02ded4b0003ea651f6ec43ec4b7
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.470131473 +0000 UTC m=+1.554981002 container create 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=60/61 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60) [1] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=60/61 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532761 systemd[1]: Started libpod-conmon-8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc.scope.
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.445532517 +0000 UTC m=+1.530382056 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:43:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f6d1654b17aa9e1e2d1bb1898727c6da69422d8d7d83b9b4843a588b260408/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.556522261 +0000 UTC m=+1.641371790 container init 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.562246191 +0000 UTC m=+1.647095700 container start 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 gifted_roentgen[98245]: 65534 65534
Nov 23 15:43:28 np0005532761 systemd[1]: libpod-8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc.scope: Deactivated successfully.
Nov 23 15:43:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:28 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.615109718 +0000 UTC m=+1.699959227 container attach 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.615505799 +0000 UTC m=+1.700355308 container died 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-76f6d1654b17aa9e1e2d1bb1898727c6da69422d8d7d83b9b4843a588b260408-merged.mount: Deactivated successfully.
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.71764828 +0000 UTC m=+1.802497789 container remove 8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc (image=quay.io/prometheus/alertmanager:v0.25.0, name=gifted_roentgen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98110]: 2025-11-23 20:43:28.724827018 +0000 UTC m=+1.809676527 volume remove 92e03a4b1552ed8b453eecf42fa7c352798de02ded4b0003ea651f6ec43ec4b7
Nov 23 15:43:28 np0005532761 systemd[1]: libpod-conmon-8deb958f5a0d826a226f1246a7f2613cbdd01769247a464ecf4f820c5947e6dc.scope: Deactivated successfully.
Nov 23 15:43:28 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.783444637 +0000 UTC m=+0.038734278 volume create 6fbe4850a06ae95d5d3807daf8ae169658e029ce476b9b2dabd3bbff463dc2f3
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.794552418 +0000 UTC m=+0.049842059 container create d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 systemd[1]: Started libpod-conmon-d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f.scope.
Nov 23 15:43:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:28 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0955d357d3a9b8bf846c36e7fb49904df3d4fdce43b5fd96d2d46812deb1e0/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.852738605 +0000 UTC m=+0.108028266 container init d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.85866539 +0000 UTC m=+0.113955021 container start d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 cranky_benz[98280]: 65534 65534
Nov 23 15:43:28 np0005532761 systemd[1]: libpod-d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f.scope: Deactivated successfully.
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.765093135 +0000 UTC m=+0.020382806 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.862012559 +0000 UTC m=+0.117302200 container attach d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.862232554 +0000 UTC m=+0.117522195 container died d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2e0955d357d3a9b8bf846c36e7fb49904df3d4fdce43b5fd96d2d46812deb1e0-merged.mount: Deactivated successfully.
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.897703185 +0000 UTC m=+0.152992826 container remove d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:28 np0005532761 podman[98262]: 2025-11-23 20:43:28.903685732 +0000 UTC m=+0.158975373 volume remove 6fbe4850a06ae95d5d3807daf8ae169658e029ce476b9b2dabd3bbff463dc2f3
Nov 23 15:43:28 np0005532761 systemd[1]: libpod-conmon-d097f36b732872bd8f8cbb1542e279f5c3bad3423c0e9e263c758e507e34fa2f.scope: Deactivated successfully.
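The two short-lived podman runs above (gifted_roentgen, then cranky_benz) follow the same pattern: volume create, container create/init/start, a single "65534 65534" line on stdout, died, remove, volume remove. That looks like cephadm probing the alertmanager image for the uid/gid the daemon should run as, immediately before the real unit is started below. A sketch for watching that lifecycle from podman's own event stream (the filters and JSON output are stock podman features; the 5-minute window is arbitrary):

    #!/usr/bin/env python3
    # Sketch: tail podman's event stream for the probe lifecycle shown
    # above. The image name comes from the log lines.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--since", "5m", "--format", "json",
         "--filter", "image=quay.io/prometheus/alertmanager:v0.25.0"],
        stdout=subprocess.PIPE, text=True,
    )
    for raw in proc.stdout:
        ev = json.loads(raw)
        # Expect the same sequence as the journal: create, init, start,
        # attach, died, remove (plus the paired volume create/remove).
        print(ev.get("Time"), ev.get("Type"), ev.get("Status"), ev.get("Name"))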
Nov 23 15:43:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 4 op/s
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 23 15:43:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:28 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:28 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:29 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:29 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:29 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:29 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:29 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62) [1] r=0 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.335160255s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.841369629s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.335133553s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.841369629s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.334313393s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840881348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=53/54 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.334293365s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840881348s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.334093094s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840759277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=53/54 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.334064484s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.333758354s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 184.840713501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 62 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=11.333720207s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.840713501s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532761 systemd[1]: Starting Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 23 15:43:29 np0005532761 podman[98422]: 2025-11-23 20:43:29.736788167 +0000 UTC m=+0.037905756 volume create 3bff29a126d41977d2ef68aaaf86ce768a3d5f4974f61dc4c54cf30b2407f331
Nov 23 15:43:29 np0005532761 podman[98422]: 2025-11-23 20:43:29.750563588 +0000 UTC m=+0.051681147 container create 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:29 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff08e9448a4e6e871027af73888fe043d3ec51aa1595a432d7bb59a854964e3d/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff08e9448a4e6e871027af73888fe043d3ec51aa1595a432d7bb59a854964e3d/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:29 np0005532761 podman[98422]: 2025-11-23 20:43:29.809236058 +0000 UTC m=+0.110353627 container init 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:29 np0005532761 podman[98422]: 2025-11-23 20:43:29.814101636 +0000 UTC m=+0.115219185 container start 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:29 np0005532761 podman[98422]: 2025-11-23 20:43:29.719878543 +0000 UTC m=+0.020996112 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:43:29 np0005532761 bash[98422]: 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006
Nov 23 15:43:29 np0005532761 systemd[1]: Started Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.842Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.842Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.851Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.853Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.889Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.890Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:29 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.895Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 23 15:43:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:29.896Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
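At this point alertmanager is up on 192.168.122.100:9093 with TLS off, per its own startup lines. A quick liveness check against the stock Alertmanager endpoints (address and port taken from the "Listening on" line above):

    #!/usr/bin/env python3
    # Sketch: confirm the alertmanager instance started above is serving.
    # /-/healthy and /-/ready are standard Alertmanager endpoints; plain
    # HTTP matches the "TLS is disabled." log line.
    from urllib.request import urlopen

    for path in ("/-/healthy", "/-/ready"):
        with urlopen(f"http://192.168.122.100:9093{path}", timeout=5) as resp:
            print(path, resp.status, resp.read().decode().strip())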
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 79059a31-cde3-4aa3-9ba9-45217836dae0 (Updating alertmanager deployment (+1 -> 1))
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 79059a31-cde3-4aa3-9ba9-45217836dae0 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 1481368b-efd3-429e-952b-08a7f071521d (Updating grafana deployment (+1 -> 1))
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 23 15:43:30 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:30 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe8400a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 23 15:43:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 23 15:43:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 954 B/s, 2 keys/s, 20 objects/s recovering
Nov 23 15:43:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:30 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: Regenerating cephadm self-signed grafana TLS certificates
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: Deploying daemon grafana.compute-0 on compute-0
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 23 15:43:31 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 23 15:43:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 23 15:43:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 23 15:43:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:31 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:31.854Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000206757s
Nov 23 15:43:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 23 15:43:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 23 15:43:32 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 23 15:43:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 23 15:43:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:32 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 954 B/s, 2 keys/s, 20 objects/s recovering
Nov 23 15:43:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:32 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe8400a3f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:33 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 25 completed events
Nov 23 15:43:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:43:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:33 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event,28 pgs not in active + clean state
Nov 23 15:43:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 23 15:43:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 23 15:43:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:33 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 23 15:43:34 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:34 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 23 15:43:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 23 15:43:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 700 B/s, 1 keys/s, 14 objects/s recovering
Nov 23 15:43:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:34 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 23 15:43:35 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 66 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65) [1] r=0 lpr=65 pi=[57,65)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 23 15:43:35 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 23 15:43:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:35 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:36 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Nov 23 15:43:36 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Nov 23 15:43:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 682 B/s wr, 89 op/s; 226 B/s, 12 objects/s recovering
Nov 23 15:43:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 23 15:43:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 23 15:43:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:36 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:37 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 23 15:43:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 23 15:43:37 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 23 15:43:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:37 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:38 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event c2fe8990-ebff-4aa5-b7c2-fa35594b2c02 (Global Recovery Event) in 5 seconds
Nov 23 15:43:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:38 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:38 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 23 15:43:38 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 23 15:43:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 551 B/s wr, 72 op/s; 183 B/s, 10 objects/s recovering
Nov 23 15:43:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 23 15:43:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 23 15:43:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:38 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.54220185 +0000 UTC m=+8.732408089 container create b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.521404464 +0000 UTC m=+8.711610703 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:43:39 np0005532761 systemd[1]: Started libpod-conmon-b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387.scope.
Nov 23 15:43:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.61941578 +0000 UTC m=+8.809622079 container init b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.626095472 +0000 UTC m=+8.816301711 container start b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.629560941 +0000 UTC m=+8.819767190 container attach b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 dazzling_tharp[98773]: 472 0
Nov 23 15:43:39 np0005532761 systemd[1]: libpod-b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387.scope: Deactivated successfully.
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.630246039 +0000 UTC m=+8.820452288 container died b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Nov 23 15:43:39 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Nov 23 15:43:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9e9b807498aa8691e4deeac3fe9e66b14f4f01b71b925b11cf1f788e9dbaa1e7-merged.mount: Deactivated successfully.
Nov 23 15:43:39 np0005532761 podman[98552]: 2025-11-23 20:43:39.681240033 +0000 UTC m=+8.871446292 container remove b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387 (image=quay.io/ceph/grafana:10.4.0, name=dazzling_tharp, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 systemd[1]: libpod-conmon-b351a6216d96ffcb1645ab83173808524f49bd130a28cd6f0e725a132682b387.scope: Deactivated successfully.
Nov 23 15:43:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.744860272 +0000 UTC m=+0.039840767 container create 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 systemd[1]: Started libpod-conmon-49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52.scope.
Nov 23 15:43:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.797978141 +0000 UTC m=+0.092958666 container init 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:39 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.802367995 +0000 UTC m=+0.097348490 container start 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 intelligent_cannon[98804]: 472 0
Nov 23 15:43:39 np0005532761 systemd[1]: libpod-49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52.scope: Deactivated successfully.
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.805653219 +0000 UTC m=+0.100633714 container attach 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 conmon[98804]: conmon 49bd3ea75867ffd0b326 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52.scope/container/memory.events
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.80648242 +0000 UTC m=+0.101462925 container died 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.726366205 +0000 UTC m=+0.021346730 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:43:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ef75070e60137974e4f053effe24a2ae13c748a25bbfd5f8aacec4ced7a297c2-merged.mount: Deactivated successfully.
Nov 23 15:43:39 np0005532761 podman[98788]: 2025-11-23 20:43:39.842845477 +0000 UTC m=+0.137825962 container remove 49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_cannon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:39 np0005532761 systemd[1]: libpod-conmon-49bd3ea75867ffd0b326c9073a16d14d93eede99f80760cea0f4f0bfd103ee52.scope: Deactivated successfully.
Nov 23 15:43:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:43:39.857Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003872512s
Nov 23 15:43:39 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:40 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:40 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 23 15:43:40 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:40 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:40 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532761 systemd[1]: Starting Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:40 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:40 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 23 15:43:40 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 23 15:43:40 np0005532761 podman[98943]: 2025-11-23 20:43:40.801071204 +0000 UTC m=+0.041364298 container create 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:40 np0005532761 podman[98943]: 2025-11-23 20:43:40.866297244 +0000 UTC m=+0.106590368 container init 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:40 np0005532761 podman[98943]: 2025-11-23 20:43:40.873054509 +0000 UTC m=+0.113347603 container start 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:43:40 np0005532761 bash[98943]: 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b
Nov 23 15:43:40 np0005532761 podman[98943]: 2025-11-23 20:43:40.781149241 +0000 UTC m=+0.021442355 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:43:40 np0005532761 systemd[1]: Started Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 86 op/s; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 226 B/s, 12 objects/s recovering
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:40 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 1481368b-efd3-429e-952b-08a7f071521d (Updating grafana deployment (+1 -> 1))
Nov 23 15:43:41 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 1481368b-efd3-429e-952b-08a7f071521d (Updating grafana deployment (+1 -> 1)) in 11 seconds
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037411475Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-23T20:43:41Z
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037682592Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037698823Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037703163Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037706843Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037710393Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037714213Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037718343Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037722323Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037726143Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037729463Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037733173Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037736703Z level=info msg=Target target=[all]
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037744034Z level=info msg="Path Home" path=/usr/share/grafana
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037747254Z level=info msg="Path Data" path=/var/lib/grafana
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037750414Z level=info msg="Path Logs" path=/var/log/grafana
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037753734Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037757454Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=settings t=2025-11-23T20:43:41.037760784Z level=info msg="App mode production"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore t=2025-11-23T20:43:41.038117783Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore t=2025-11-23T20:43:41.038140204Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.038859363Z level=info msg="Starting DB migrations"
Nov 23 15:43:41 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev 43e1f0b8-48fc-4af3-988f-0ebf3e76eef0 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.041409918Z level=info msg="Executing migration" id="create migration_log table"
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.042901507Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.49299ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.0531187Z level=info msg="Executing migration" id="create user table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.05466985Z level=info msg="Migration successfully executed" id="create user table" duration=1.55ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.063318012Z level=info msg="Executing migration" id="add unique index user.login"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.0643883Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.070548ms
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.pteysg on compute-0
Nov 23 15:43:41 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.pteysg on compute-0
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.076349688Z level=info msg="Executing migration" id="add unique index user.email"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.077095258Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=747.65µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.08534357Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.08611195Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=770.11µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.087968168Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.088590904Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=622.566µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.090620717Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.092892145Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.271018ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.096010915Z level=info msg="Executing migration" id="create user table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.096793945Z level=info msg="Migration successfully executed" id="create user table v2" duration=785.1µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.098952911Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.100019238Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.066267ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.101756174Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.102878092Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.121118ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.105751726Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.10629009Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=538.374µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.108199729Z level=info msg="Executing migration" id="Drop old table user_v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.108743274Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=541.225µs
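[editor's note] The run of migrations above (drop the v1 unique indexes, rename user to user_v1, create the v2 table and its indexes, copy the rows, drop the old table) is the rename/copy/drop pattern Grafana's migrator uses to reshape a table, which is the portable way to change column definitions on SQLite. A hypothetical illustration with an in-memory SQLite database; the column set and the org_id default are made up for the example, not taken from Grafana's schema:

# Hypothetical sketch of the rename/copy/drop sequence logged above,
# using sqlite3 from the standard library and illustrative columns.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, login TEXT, email TEXT);
    INSERT INTO user (login, email) VALUES ('admin', 'admin@localhost');

    -- "Rename table user to user_v1 - v1"
    ALTER TABLE user RENAME TO user_v1;

    -- "create user table v2" (new shape; extra column is illustrative)
    CREATE TABLE user (id INTEGER PRIMARY KEY, login TEXT, email TEXT,
                       org_id INTEGER NOT NULL DEFAULT 1);
    CREATE UNIQUE INDEX UQE_user_login ON user (login);
    CREATE UNIQUE INDEX UQE_user_email ON user (email);

    -- copy the v1 rows into v2, then "Drop old table user_v1"
    INSERT INTO user (id, login, email)
        SELECT id, login, email FROM user_v1;
    DROP TABLE user_v1;
""")
print(db.execute("SELECT id, login, org_id FROM user").fetchall())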
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.110814497Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.111741681Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=941.295µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.113570108Z level=info msg="Executing migration" id="Update user table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.113684891Z level=info msg="Migration successfully executed" id="Update user table charset" duration=115.363µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.131286585Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.133404829Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.123565ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.135326749Z level=info msg="Executing migration" id="Add missing user data"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.13576667Z level=info msg="Migration successfully executed" id="Add missing user data" duration=442.131µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.139031814Z level=info msg="Executing migration" id="Add is_disabled column to user"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.140128492Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.096598ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.141866927Z level=info msg="Executing migration" id="Add index user.login/user.email"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.142546045Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=679.238µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.144821953Z level=info msg="Executing migration" id="Add is_service_account column to user"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.146041994Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.219811ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.148871567Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.159389709Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.517622ms
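[editor's note] Every migrator line carries a duration= field; most migrations here complete in well under a millisecond, while the nullable-column rewrite just above took about 10.5 ms. A minimal sketch, assuming these grafana lines are piped on stdin, that parses the id and duration fields and prints the slowest migrations:

# Minimal sketch: rank Grafana migrations by their logged duration.
import re
import sys

PAT = re.compile(r'id="([^"]+)" duration=([\d.]+)(µs|ms|s)\b')
SCALE = {"µs": 1e-3, "ms": 1.0, "s": 1e3}   # normalise to milliseconds

rows = []
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        rows.append((float(m.group(2)) * SCALE[m.group(3)], m.group(1)))

for ms, mig_id in sorted(rows, reverse=True)[:5]:
    print(f"{ms:8.3f} ms  {mig_id}")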
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.162926389Z level=info msg="Executing migration" id="Add uid column to user"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.16449627Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.569731ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.166740268Z level=info msg="Executing migration" id="Update uid column values for users"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.167078757Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=338.648µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.169083149Z level=info msg="Executing migration" id="Add unique index user_uid"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.169974831Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=891.562µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.172867496Z level=info msg="Executing migration" id="create temp user table v1-7"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.173624276Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=756.28µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.176360176Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.177068834Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=708.338µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.179092337Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.179753393Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=660.846µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.182259858Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.183232183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=973.155µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.185408089Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.186107997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=699.758µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.188259712Z level=info msg="Executing migration" id="Update temp_user table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.188413916Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=151.434µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.190254034Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.191099765Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=842.111µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.193236531Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.194158725Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=922.234µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.196014103Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.196796022Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=781.679µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.198899197Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.199613985Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=714.628µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.20175187Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.205162628Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.410398ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.20676402Z level=info msg="Executing migration" id="create temp_user v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.207677533Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=911.423µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.209621174Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.210394593Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=772.969µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.212272581Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.213052532Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=779.851µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.214650193Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.215480254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=832.292µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.216953852Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.21765085Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=696.958µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.22036019Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.22071185Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=351.66µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.222741191Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.223285335Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=544.104µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.22581484Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.22618446Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=369.66µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.228179752Z level=info msg="Executing migration" id="create star table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.228740406Z level=info msg="Migration successfully executed" id="create star table" duration=560.404µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.230881411Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.231487177Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=605.246µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.233640562Z level=info msg="Executing migration" id="create org table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.234268458Z level=info msg="Migration successfully executed" id="create org table v1" duration=627.446µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.236616609Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.237234585Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=617.736µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.239731009Z level=info msg="Executing migration" id="create org_user table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.240429558Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=698.989µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.243451505Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.244149213Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=697.608µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.246296949Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.246966035Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=669.016µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.249154342Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.249900601Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=745.939µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.252131709Z level=info msg="Executing migration" id="Update org table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.252201431Z level=info msg="Migration successfully executed" id="Update org table charset" duration=70.332µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.253815052Z level=info msg="Executing migration" id="Update org_user table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.253871384Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=57.161µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.256222564Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.256407059Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=184.685µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.259391176Z level=info msg="Executing migration" id="create dashboard table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.260078664Z level=info msg="Migration successfully executed" id="create dashboard table" duration=687.338µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.262412303Z level=info msg="Executing migration" id="add index dashboard.account_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.263102532Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=688.209µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.265235026Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.265947985Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=712.729µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.267877144Z level=info msg="Executing migration" id="create dashboard_tag table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.268434679Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=557.395µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.270215895Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.270850751Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=634.576µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.273628863Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.274321781Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=693.588µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.276358393Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.280847168Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.488175ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.282756338Z level=info msg="Executing migration" id="create dashboard v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.283408425Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=653.577µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.286221857Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.286904015Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=682.378µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.289737058Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.290389915Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=652.576µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.292486339Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.292857198Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=370.679µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.295353713Z level=info msg="Executing migration" id="drop table dashboard_v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.296476592Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.123349ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.298709869Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.298823562Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=92.522µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.301337247Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.302856026Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.518679ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.305115675Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.306628123Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.511968ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.308235105Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.309626621Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.391086ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.311206981Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.31191075Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=702.199µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.313896521Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.315285626Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.389435ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.317143554Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.317977756Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=834.152µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.319943306Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.320614854Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=671.428µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.323242692Z level=info msg="Executing migration" id="Update dashboard table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.323316384Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=72.182µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.325256743Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.325320955Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=64.912µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.327548083Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.329327988Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.780175ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.331054643Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.33249521Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.440837ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.334800989Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.336311548Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.508369ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.339662144Z level=info msg="Executing migration" id="Add column uid in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.341237476Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.575212ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.343163815Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.34335618Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=192.695µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.345583978Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.346233394Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=648.976µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.349007975Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.349759675Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=751.64µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.351951731Z level=info msg="Executing migration" id="Update dashboard title length"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.352014173Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=63.212µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.354369174Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.355045561Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=676.097µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.357092484Z level=info msg="Executing migration" id="create dashboard_provisioning"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.357692929Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=600.725µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.360159903Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.364165546Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.005433ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.36664286Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.367260596Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=617.816µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.370735146Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.371384892Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=648.966µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.3736081Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.374357169Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=748.679µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.377079209Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.377690815Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=609.756µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.379529072Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.380109307Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=580.485µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.382030427Z level=info msg="Executing migration" id="Add check_sum column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.384028168Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.997401ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.385951108Z level=info msg="Executing migration" id="Add index for dashboard_title"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.386652856Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=702.448µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.389355585Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.389688224Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=335.959µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.392340432Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.392536587Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=196.465µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.396186082Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.396958291Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=772.229µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.399517747Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.401468228Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.950111ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.403340746Z level=info msg="Executing migration" id="create data_source table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.404189668Z level=info msg="Migration successfully executed" id="create data_source table" duration=849.352µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.407294748Z level=info msg="Executing migration" id="add index data_source.account_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.408071737Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=777.049µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.411195789Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.411911057Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=714.208µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.415466179Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded (PG_DEGRADED)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.416222138Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=756.419µs
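[editor's note] The ceph-mon WRN line two entries up reports PG_DEGRADED: 3 of 226 objects temporarily have fewer replicas than the pool requires, spread over 2 placement groups. This is common while daemons restart or new services are being deployed, and the health check clears once recovery re-replicates the objects. A minimal sketch that extracts the degraded ratio from such a line:

# Minimal sketch: parse the degraded-object figures out of a
# PG_DEGRADED health line like the ceph-mon WRN entry above.
import re

line = ("Health check failed: Degraded data redundancy: 3/226 objects "
        "degraded (1.327%), 2 pgs degraded (PG_DEGRADED)")
m = re.search(r"(\d+)/(\d+) objects degraded \(([\d.]+)%\), (\d+) pgs", line)
if m:
    deg, total, pct, pgs = m.groups()
    print(f"{deg}/{total} objects ({pct}%) across {pgs} PGs")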
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.418365793Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.419128783Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=763.009µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.4228829Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.428478334Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.571443ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.433586406Z level=info msg="Executing migration" id="create data_source table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.434471848Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=886.402µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.439144749Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.439923529Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=777.09µs
Nov 23 15:43:41 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.442950196Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.443629025Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=676.178µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.449931996Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.450510612Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=578.726µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.452129783Z level=info msg="Executing migration" id="Add column with_credentials"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.453969011Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.839158ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.45624333Z level=info msg="Executing migration" id="Add secure json data column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.457995445Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.752755ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.462031549Z level=info msg="Executing migration" id="Update data_source table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.462139471Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=109.942µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.464551124Z level=info msg="Executing migration" id="Update initial version to 1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.464834821Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=283.567µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.468427343Z level=info msg="Executing migration" id="Add read_only data column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.470855026Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.427263ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.472856758Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.473091244Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=234.696µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.475687831Z level=info msg="Executing migration" id="Update json_data with nulls"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.475901956Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=214.475µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.478367Z level=info msg="Executing migration" id="Add uid column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.480243298Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.875568ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.481931721Z level=info msg="Executing migration" id="Update uid value"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.482132346Z level=info msg="Migration successfully executed" id="Update uid value" duration=200.815µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.484007485Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.484735994Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=725.628µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.486666473Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.487499665Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=832.782µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.490448411Z level=info msg="Executing migration" id="create api_key table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.49116123Z level=info msg="Migration successfully executed" id="create api_key table" duration=714.459µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.494369412Z level=info msg="Executing migration" id="add index api_key.account_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.495044939Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=674.917µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.497427951Z level=info msg="Executing migration" id="add index api_key.key"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.498090058Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=661.857µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.50050967Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.501218209Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=709.869µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.504432361Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.50514981Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=717.259µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.506984307Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.507648964Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=664.637µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.509762869Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.510442076Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=678.757µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.512961341Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.517705274Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.742753ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.524039507Z level=info msg="Executing migration" id="create api_key table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.524720845Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=681.939µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.527060244Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.527736782Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=676.558µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.53230682Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.533184713Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=877.823µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.536169199Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.536850306Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=680.627µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.542305177Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.542640666Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=335.449µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.545408207Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.546031124Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=622.846µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.550684103Z level=info msg="Executing migration" id="Update api_key table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.550744605Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=61.202µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.553326511Z level=info msg="Executing migration" id="Add expires to api_key table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.555256831Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.93036ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.557437577Z level=info msg="Executing migration" id="Add service account foreign key"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.559269854Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.834877ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.561841451Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.562499298Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=670.998µs
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.563546744 +0000 UTC m=+0.052974505 container create 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.564976871Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.567904787Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.927776ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.571069599Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.574276091Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.211652ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.57772692Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.578643634Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=916.484µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.581359144Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.582092503Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=734.119µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.584162356Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.585128821Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=963.065µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.587148233Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.588504248Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.355325ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.591213098Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.591894176Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=680.798µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.595279033Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.595928789Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=649.716µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.59870366Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.598746332Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=45.072µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.600917928Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.600938788Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=21.51µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.603607407Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.605629039Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.021342ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.607759124Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.609841058Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.081934ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.612490686Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.612546797Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=57.081µs
Nov 23 15:43:41 np0005532761 systemd[1]: Started libpod-conmon-1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9.scope.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.614664762Z level=info msg="Executing migration" id="create quota table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.615391521Z level=info msg="Migration successfully executed" id="create quota table v1" duration=726.819µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.617798653Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.618472981Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=673.748µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.620679417Z level=info msg="Executing migration" id="Update quota table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.620703728Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=24.941µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.622607607Z level=info msg="Executing migration" id="create plugin_setting table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.623351756Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=742.909µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.626141698Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.627148124Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.006006ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.630558402Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.540590683 +0000 UTC m=+0.030018474 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.63360008Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.041098ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.635874719Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.63590633Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=34.271µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.641929315Z level=info msg="Executing migration" id="create session table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.643083565Z level=info msg="Migration successfully executed" id="create session table" duration=1.153919ms
Nov 23 15:43:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.645675321Z level=info msg="Executing migration" id="Drop old table playlist table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.645772544Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=97.923µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.647691463Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.647782376Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=91.273µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.649710096Z level=info msg="Executing migration" id="create playlist table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.650519146Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=810.581µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.653067702Z level=info msg="Executing migration" id="create playlist item table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.653961785Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=893.353µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.65609475Z level=info msg="Executing migration" id="Update playlist table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.65612075Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=27.741µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.660560125Z level=info msg="Executing migration" id="Update playlist_item table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.660583936Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=24.761µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.663320427Z level=info msg="Executing migration" id="Add playlist column created_at"
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.663737157 +0000 UTC m=+0.153164958 container init 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.667012851Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.690624ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.66890214Z level=info msg="Executing migration" id="Add playlist column updated_at"
Nov 23 15:43:41 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.672871202Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.971292ms
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.67355446 +0000 UTC m=+0.162982231 container start 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.674985357Z level=info msg="Executing migration" id="drop preferences table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.675079049Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=94.452µs
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.677148793 +0000 UTC m=+0.166576564 container attach 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.677279846Z level=info msg="Executing migration" id="drop preferences table v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.677369909Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=88.733µs
Nov 23 15:43:41 np0005532761 intelligent_satoshi[99086]: 0 0
Nov 23 15:43:41 np0005532761 systemd[1]: libpod-1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9.scope: Deactivated successfully.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.679280968Z level=info msg="Executing migration" id="create preferences table v3"
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.679617796 +0000 UTC m=+0.169045557 container died 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.68012243Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=841.343µs
Nov 23 15:43:41 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.684269407Z level=info msg="Executing migration" id="Update preferences table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.684324138Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=52.382µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.687441578Z level=info msg="Executing migration" id="Add column team_id in preferences"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.690014345Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.572527ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.692162599Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.692299353Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=137.214µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.694435358Z level=info msg="Executing migration" id="Add column week_start in preferences"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.698477073Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.042744ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.700672209Z level=info msg="Executing migration" id="Add column preferences.json_data"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.704667532Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.995903ms
Nov 23 15:43:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6036c92ee6dd6a276e04485e2fe599a3b72d9286eb79ce24688b2d333bd2e773-merged.mount: Deactivated successfully.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.706507079Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.706575582Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=70.672µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.710565574Z level=info msg="Executing migration" id="Add preferences index org_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.71156441Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.002026ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.714777962Z level=info msg="Executing migration" id="Add preferences index user_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.71544645Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=666.308µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.718848107Z level=info msg="Executing migration" id="create alert table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.719774241Z level=info msg="Migration successfully executed" id="create alert table v1" duration=926.324µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.724912904Z level=info msg="Executing migration" id="add index alert org_id & id "
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.725731975Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=819.111µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.728846555Z level=info msg="Executing migration" id="add index alert state"
Nov 23 15:43:41 np0005532761 podman[99069]: 2025-11-23 20:43:41.729149493 +0000 UTC m=+0.218577244 container remove 1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9 (image=quay.io/ceph/haproxy:2.3, name=intelligent_satoshi)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.729601665Z level=info msg="Migration successfully executed" id="add index alert state" duration=754.599µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.732042318Z level=info msg="Executing migration" id="add index alert dashboard_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.732838758Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=796.55µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.735384643Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.73602631Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=642.287µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.739856008Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.740618459Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=759.74µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.744699534Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.745545425Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=849.971µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.748872101Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Nov 23 15:43:41 np0005532761 systemd[1]: libpod-conmon-1eaaf50a6e72d8f83ed9660b8f3707b5a536a2bca8092e2be8de82a37369dba9.scope: Deactivated successfully.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.75620101Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.327199ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.758060138Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.758664394Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=603.636µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.760383898Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.761106206Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=722.088µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.764488904Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.76473611Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=247.536µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.76942495Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.770136809Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=711.849µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.772281274Z level=info msg="Executing migration" id="create alert_notification table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.772924611Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=642.927µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.775474957Z level=info msg="Executing migration" id="Add column is_default"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.778062284Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.586917ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.779946722Z level=info msg="Executing migration" id="Add column frequency"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.782715233Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.768031ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.784506879Z level=info msg="Executing migration" id="Add column send_reminder"
Nov 23 15:43:41 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.787383903Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.877274ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.789102248Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.791570081Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.467564ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.79382192Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.794558838Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=736.888µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.798008728Z level=info msg="Executing migration" id="Update alert table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.798032798Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=24.79µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:41 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.800138352Z level=info msg="Executing migration" id="Update alert_notification table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.800157813Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=20.091µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.802600776Z level=info msg="Executing migration" id="create notification_journal table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.803256953Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=655.487µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.805578162Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.806369883Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=792.971µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.808465397Z level=info msg="Executing migration" id="drop alert_notification_journal"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.809263728Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=797.951µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.812398788Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.813091387Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=692.449µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.816760991Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.817448809Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=689.198µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.8206196Z level=info msg="Executing migration" id="Add for to alert table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.823358461Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.739231ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.82528644Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.828051262Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.764822ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.830401972Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.830555526Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=153.734µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.832553597Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.833259126Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=705.609µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.836451628Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.837397573Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=945.435µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.839641271Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.842253497Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.611816ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.844789393Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.844885365Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=99.632µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.851096516Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.852307367Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.211851ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.854342429Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.855086259Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=744.13µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.858580149Z level=info msg="Executing migration" id="Drop old annotation table v4"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.858668681Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=89.973µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.860649462Z level=info msg="Executing migration" id="create annotation table v5"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.861459123Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=809.371µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.865011715Z level=info msg="Executing migration" id="add index annotation 0 v3"
Nov 23 15:43:41 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.866941064Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.929539ms
Nov 23 15:43:41 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.870800054Z level=info msg="Executing migration" id="add index annotation 1 v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.871455Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=654.706µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.874295063Z level=info msg="Executing migration" id="add index annotation 2 v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.875103775Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=808.102µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.877187928Z level=info msg="Executing migration" id="add index annotation 3 v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.878491201Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.302503ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.881101359Z level=info msg="Executing migration" id="add index annotation 4 v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.882016233Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=914.304µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.886023866Z level=info msg="Executing migration" id="Update annotation table charset"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.886052427Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=29.881µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.888356316Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.892490092Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.131676ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.894958146Z level=info msg="Executing migration" id="Drop category_id index"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.895758777Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=800.201µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.897486131Z level=info msg="Executing migration" id="Add column tags to annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.901651199Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.164128ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.904211635Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.904921423Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=709.488µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.907349996Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.908170547Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=820.061µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.91140414Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.912250831Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=846.611µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.91410042Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.924355084Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.254174ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.92576295Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.926434968Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=671.688µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.928427069Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.929386424Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=957.165µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.93233027Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.932656748Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=326.288µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.93469126Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.935270395Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=579.155µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.937134193Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.937288717Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=154.604µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.940176291Z level=info msg="Executing migration" id="Add created time to annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.944029351Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.85022ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.945892079Z level=info msg="Executing migration" id="Add updated time to annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.949630715Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.738377ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.95136069Z level=info msg="Executing migration" id="Add index for created in annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.952163091Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=802.412µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.954706906Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.955513007Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=805.441µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.959255873Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.959449438Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=193.895µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.961106621Z level=info msg="Executing migration" id="Add epoch_end column"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.964971241Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.861649ms
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.967798433Z level=info msg="Executing migration" id="Add index for epoch_end"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.968638615Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=839.802µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.971240873Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.971390426Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=149.183µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.973779787Z level=info msg="Executing migration" id="Move region to single row"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.974067965Z level=info msg="Migration successfully executed" id="Move region to single row" duration=287.958µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.976072477Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.976790546Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=720.289µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.979025983Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.97968231Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=656.047µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.981312012Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.982064731Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=756.809µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.98515853Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.985846839Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=688.019µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.988134508Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.988845046Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=710.228µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.992392348Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.993177027Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=782.929µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.996490133Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.996556405Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=66.992µs
Nov 23 15:43:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:41.999633944Z level=info msg="Executing migration" id="create test_data table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.000335332Z level=info msg="Migration successfully executed" id="create test_data table" duration=701.378µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.003373801Z level=info msg="Executing migration" id="create dashboard_version table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.004203042Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=829.131µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.007826525Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.008489722Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=660.937µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.013008478Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.014275201Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.267213ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.017551006Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.01769489Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=144.044µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.021250771Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.022034032Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=783.42µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.024717191Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.024780732Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=64.201µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.028325633Z level=info msg="Executing migration" id="create team table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.029494233Z level=info msg="Migration successfully executed" id="create team table" duration=1.16872ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.033726432Z level=info msg="Executing migration" id="add index team.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.034924083Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.197781ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.040086927Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.041259217Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.17116ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.043584156Z level=info msg="Executing migration" id="Add column uid in team"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.050241498Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.651532ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.052881776Z level=info msg="Executing migration" id="Update uid column values in team"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.053172724Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=295.278µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.057084825Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.067190735Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=10.10504ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.070421878Z level=info msg="Executing migration" id="create team member table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.071083606Z level=info msg="Migration successfully executed" id="create team member table" duration=661.988µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.073291582Z level=info msg="Executing migration" id="add index team_member.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.073956Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=664.358µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.076167346Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.076870055Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=702.609µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.078879846Z level=info msg="Executing migration" id="add index team_member.team_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.079539363Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=659.397µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.085049256Z level=info msg="Executing migration" id="Add column email to team table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.088838073Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.787418ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.090730582Z level=info msg="Executing migration" id="Add column external to team_member table"
Nov 23 15:43:42 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.094874809Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.144177ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.096548432Z level=info msg="Executing migration" id="Add column permission to team_member table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.099743604Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.195122ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.102004902Z level=info msg="Executing migration" id="create dashboard acl table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.102764741Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=759.949µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.105713898Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.106519779Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=805.761µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.109406133Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.110363858Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=957.445µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.112598556Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.113362305Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=763.459µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.116145197Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.116930217Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=783.22µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.11900041Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.119822272Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=821.772µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.122041269Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.122765218Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=723.81µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.128779372Z level=info msg="Executing migration" id="add index dashboard_permission"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.129713977Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=934.195µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.132475628Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.13296294Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=485.892µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.135359722Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.135549297Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=189.505µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.137683831Z level=info msg="Executing migration" id="create tag table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.138333679Z level=info msg="Migration successfully executed" id="create tag table" duration=649.878µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.150925493Z level=info msg="Executing migration" id="add index tag.key_value"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.152183066Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.259723ms
Nov 23 15:43:42 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:42 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.154890886Z level=info msg="Executing migration" id="create login attempt table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.155689446Z level=info msg="Migration successfully executed" id="create login attempt table" duration=798.2µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.158469797Z level=info msg="Executing migration" id="add index login_attempt.username"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.159221087Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=751.21µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.162075961Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.162870191Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=794.08µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.16556207Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.176674476Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.112506ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.178534785Z level=info msg="Executing migration" id="create login_attempt v2"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.179177691Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=638.266µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.18068695Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.18144433Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=757.54µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.183865662Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.18414408Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=278.848µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.185768911Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.186406167Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=636.976µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.191168841Z level=info msg="Executing migration" id="create user auth table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.191751335Z level=info msg="Migration successfully executed" id="create user auth table" duration=582.834µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.195703357Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.196526778Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=823.681µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.199017342Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.199090544Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=73.882µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.201384984Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.205210403Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.825498ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.207162693Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.211056783Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.893871ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.21289919Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.216638747Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.739397ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.218257939Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.222065277Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.800099ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.22375984Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.224571081Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=809.971µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.226991513Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.230565086Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.583153ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.232767972Z level=info msg="Executing migration" id="create server_lock table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.233486781Z level=info msg="Migration successfully executed" id="create server_lock table" duration=718.809µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.236273803Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.237177836Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=903.793µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.240451151Z level=info msg="Executing migration" id="create user auth token table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.241180409Z level=info msg="Migration successfully executed" id="create user auth token table" duration=729.268µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.245173862Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.245980013Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=806.131µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.248758885Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.249557015Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=797.92µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.252116091Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.252989474Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=873.403µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.255297003Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.259112831Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.815768ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.260744463Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.261537854Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=793.141µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.265596179Z level=info msg="Executing migration" id="create cache_data table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.266329437Z level=info msg="Migration successfully executed" id="create cache_data table" duration=735.808µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.268590066Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.269377166Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=787.049µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.271586463Z level=info msg="Executing migration" id="create short_url table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.272407624Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=821.591µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.275620867Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.276672104Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.051377ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.279398975Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.279487147Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=89.332µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.282005652Z level=info msg="Executing migration" id="delete alert_definition table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.282135285Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=131.513µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.284707731Z level=info msg="Executing migration" id="recreate alert_definition table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.285553493Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=846.062µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.287881743Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.288863548Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=986.045µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.291872596Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.292843531Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=927.523µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.296361222Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.296440864Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=80.582µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.29865076Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.299530843Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=876.593µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.301108574Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.302011767Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=903.163µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.30366814Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.304659146Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=993.086µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.306192945Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.307032216Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=839.261µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.308506074Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.312669561Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.161577ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.315424492Z level=info msg="Executing migration" id="drop alert_definition table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.316383397Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=958.995µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.318644386Z level=info msg="Executing migration" id="delete alert_definition_version table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.318739928Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=92.732µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.322152176Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.322942417Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=790.221µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.325929364Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.326686303Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=757.799µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.328282695Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.329058474Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=775.649µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.330759668Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.330819169Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=59.781µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.332543474Z level=info msg="Executing migration" id="drop alert_definition_version table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.333400886Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=857.402µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.335527411Z level=info msg="Executing migration" id="create alert_instance table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.33628323Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=755.369µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.339494954Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.34053815Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.042307ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.342309886Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.343120046Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=809.89µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.345645472Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.350109017Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.462744ms
Nov 23 15:43:42 np0005532761 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.pteysg for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.352097878Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.353090413Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=992.515µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.354987903Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.355743462Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=755.639µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.357352073Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.379191526Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.834973ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.380784547Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.401401519Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.601301ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.403633506Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.404442157Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=808.491µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.406142481Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.406895971Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=752.909µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.409131507Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.413007368Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.875531ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.414716411Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.4185255Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.810599ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.420199313Z level=info msg="Executing migration" id="create alert_rule table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.420960533Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=761.3µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.423485438Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.424414651Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=928.833µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.42708009Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.427850481Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=770.301µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.431471623Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.432429398Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=958.295µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.436226746Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.436279758Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=53.682µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.438028403Z level=info msg="Executing migration" id="add column for to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.442095358Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.066774ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.443843932Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: Deploying daemon haproxy.rgw.default.compute-0.pteysg on compute-0
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: Health check failed: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded (PG_DEGRADED)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.447718582Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=3.8745ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.44997241Z level=info msg="Executing migration" id="add column labels to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.454027965Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.055715ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.457400702Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.458154651Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=753.829µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.460168063Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.460958403Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=790.12µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.463252793Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.467274156Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.018863ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.469871764Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.473930868Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.058854ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.475787336Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.476635168Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=847.582µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.478881445Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.483044313Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.162638ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.484635293Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.488673428Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.035805ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.490778603Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.490842004Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=63.802µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.494919469Z level=info msg="Executing migration" id="create alert_rule_version table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.495820502Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=900.853µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.498839239Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.49960314Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=763.661µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.502128105Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.503060978Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=932.713µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.505203514Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.505248685Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=45.441µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.507275307Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.511517466Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.241639ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.513616031Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.517945782Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.329821ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.524685826Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.529081409Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.394933ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.530957847Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.535525706Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.565769ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.53766609Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.542347001Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.680751ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.544107817Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.544152838Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=45.621µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.545881742Z level=info msg="Executing migration" id=create_alert_configuration_table
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.546465747Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=583.695µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.548683155Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.553320224Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.63579ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.556256669Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.556354292Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=99.073µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.558868647Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.563349882Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.481575ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.57644792Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.577335882Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=888.872µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.580174036Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.584615871Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.442716ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.587090504Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Nov 23 15:43:42 np0005532761 podman[99230]: 2025-11-23 20:43:42.587543846 +0000 UTC m=+0.041763167 container create 201b5c6239eafbc99bf150f0203f84d5e58ad1bce75313c49fa06546fb0041ad (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-rgw-default-compute-0-pteysg)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.587786622Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=695.958µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.593231223Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.594052073Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=824.66µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.596892277Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.601238769Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.345883ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.603110057Z level=info msg="Executing migration" id="create provenance_type table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.603718203Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=605.925µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.605969861Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.60671949Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=749.369µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.608848435Z level=info msg="Executing migration" id="create alert_image table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.60945878Z level=info msg="Migration successfully executed" id="create alert_image table" duration=610.065µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.611581466Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:42 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.612480459Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=899.093µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.615145747Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.615196508Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=51.831µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.617648962Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.618488614Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=840.062µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.623385529Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.624190091Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=804.342µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.626423788Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.626725376Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.629436516Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.629837766Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=401.05µs
Nov 23 15:43:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a385e51496dadf191742114962045a66fcf34df5e64a20e125962f83f4c396/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.63194583Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.63270907Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=763.54µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.634970949Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Nov 23 15:43:42 np0005532761 podman[99230]: 2025-11-23 20:43:42.64009371 +0000 UTC m=+0.094313041 container init 201b5c6239eafbc99bf150f0203f84d5e58ad1bce75313c49fa06546fb0041ad (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-rgw-default-compute-0-pteysg)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.640388047Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.407818ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.64318801Z level=info msg="Executing migration" id="create library_element table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.644130804Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=943.094µs
Nov 23 15:43:42 np0005532761 podman[99230]: 2025-11-23 20:43:42.645389847 +0000 UTC m=+0.099609188 container start 201b5c6239eafbc99bf150f0203f84d5e58ad1bce75313c49fa06546fb0041ad (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-rgw-default-compute-0-pteysg)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.646649379Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.647663016Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.013367ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.651323219Z level=info msg="Executing migration" id="create library_element_connection table v1"
Nov 23 15:43:42 np0005532761 bash[99230]: 201b5c6239eafbc99bf150f0203f84d5e58ad1bce75313c49fa06546fb0041ad
Nov 23 15:43:42 np0005532761 podman[99230]: 2025-11-23 20:43:42.571161484 +0000 UTC m=+0.025380825 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.652037188Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=714.029µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.655122988Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.656442052Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.318574ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-rgw-default-compute-0-pteysg[99245]: [NOTICE] 326/204342 (2) : New worker #1 (4) forked
Nov 23 15:43:42 np0005532761 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.pteysg for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.659704135Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.660560338Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=856.433µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.663430352Z level=info msg="Executing migration" id="increase max description length to 2048"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.663460083Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.041µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.666563102Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.666619484Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=57.282µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.67114197Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.671394457Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=253.177µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.673753208Z level=info msg="Executing migration" id="create data_keys table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.6746133Z level=info msg="Migration successfully executed" id="create data_keys table" duration=857.693µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.677573656Z level=info msg="Executing migration" id="create secrets table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.678213432Z level=info msg="Migration successfully executed" id="create secrets table" duration=639.766µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.681255182Z level=info msg="Executing migration" id="rename data_keys name column to id"
Nov 23 15:43:42 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 23 15:43:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000051s ======
Nov 23 15:43:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:42.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 23 15:43:42 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.709975221Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.715129ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.711952763Z level=info msg="Executing migration" id="add name column into data_keys"
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.718944143Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.98733ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.721835377Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.722062313Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=228.796µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.724361352Z level=info msg="Executing migration" id="rename data_keys name column to label"
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:43:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.tmivar on compute-2
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.tmivar on compute-2
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.752955329Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.588607ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.75493566Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.780491609Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=25.553939ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.782381368Z level=info msg="Executing migration" id="create kv_store table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.783189198Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=807.53µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.785657422Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.786549995Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=892.473µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.789190803Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.78945634Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=265.637µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.792356935Z level=info msg="Executing migration" id="create permission table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.793164856Z level=info msg="Migration successfully executed" id="create permission table" duration=809.762µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.795896396Z level=info msg="Executing migration" id="add unique index permission.role_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.796730897Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=834.721µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.798945135Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.799834717Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=889.332µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.80187067Z level=info msg="Executing migration" id="create role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.802604198Z level=info msg="Migration successfully executed" id="create role table" duration=733.238µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.80692682Z level=info msg="Executing migration" id="add column display_name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.812266637Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.337957ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.814660279Z level=info msg="Executing migration" id="add column group_name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.819833473Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.173354ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.82165898Z level=info msg="Executing migration" id="add index role.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.822515012Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=856.032µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.825570871Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.826468543Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=897.392µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.828780083Z level=info msg="Executing migration" id="add index role_org_id_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.829673857Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=893.194µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.832270264Z level=info msg="Executing migration" id="create team role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.833056513Z level=info msg="Migration successfully executed" id="create team role table" duration=786.479µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.835528367Z level=info msg="Executing migration" id="add index team_role.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.836644406Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.116239ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.839074258Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.840045924Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=970.625µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.842551008Z level=info msg="Executing migration" id="add index team_role.team_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.84341Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=859.332µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.845900564Z level=info msg="Executing migration" id="create user role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.846680905Z level=info msg="Migration successfully executed" id="create user role table" duration=778.071µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.84885782Z level=info msg="Executing migration" id="add index user_role.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.849741844Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=884.353µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.852089044Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.852966316Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=875.202µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.855003249Z level=info msg="Executing migration" id="add index user_role.user_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.855882701Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=879.562µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.858046278Z level=info msg="Executing migration" id="create builtin role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.858929911Z level=info msg="Migration successfully executed" id="create builtin role table" duration=881.903µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.86125843Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.862150073Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=891.013µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.864408241Z level=info msg="Executing migration" id="add index builtin_role.name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.865291494Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=883.143µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.868538638Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.87561663Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.069753ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.877743715Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.878727311Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=983.806µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.881250415Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.882163869Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=913.374µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.884411647Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.88532226Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=909.063µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.886943473Z level=info msg="Executing migration" id="add unique index role.uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.887868336Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=924.443µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.88958749Z level=info msg="Executing migration" id="create seed assignment table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.890398281Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=808.971µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.892577738Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.893625094Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.047326ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.89618066Z level=info msg="Executing migration" id="add column hidden to role table"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.901955359Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.774519ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.903622352Z level=info msg="Executing migration" id="permission kind migration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.909409372Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.78653ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.910927091Z level=info msg="Executing migration" id="permission attribute migration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.916642398Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.713447ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.918408233Z level=info msg="Executing migration" id="permission identifier migration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.923917615Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.508652ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.925698131Z level=info msg="Executing migration" id="add permission identifier index"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.926607425Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=909.014µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.929478899Z level=info msg="Executing migration" id="add permission action scope role_id index"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.930664259Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.1854ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.933146614Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.934242681Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.096048ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.936151211Z level=info msg="Executing migration" id="create query_history table v1"
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 66 op/s; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 174 B/s, 9 objects/s recovering
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.937080614Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=929.313µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.939468066Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.940523173Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.055027ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.943264454Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.943351986Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=87.042µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.946298172Z level=info msg="Executing migration" id="rbac disabled migrator"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.946334373Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=37.581µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.947931694Z level=info msg="Executing migration" id="teams permissions migration"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.948290003Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=359.229µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.949907115Z level=info msg="Executing migration" id="dashboard permissions"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.950319446Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=412.891µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.951940878Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.952590984Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=649.637µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:42 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.95552254Z level=info msg="Executing migration" id="drop managed folder create actions"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.955768236Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=244.116µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.957637254Z level=info msg="Executing migration" id="alerting notification permissions"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.958153397Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=516.083µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.960026536Z level=info msg="Executing migration" id="create query_history_star table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.960880828Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=853.871µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.96330755Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.964167323Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=859.483µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.966189055Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.971962553Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.772798ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.973483292Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.973527123Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=44.311µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.974996791Z level=info msg="Executing migration" id="create correlation table v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.975858054Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=861.063µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.978283287Z level=info msg="Executing migration" id="add index correlations.uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.979084457Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=799.15µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.983447779Z level=info msg="Executing migration" id="add index correlations.source_uid"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.984284231Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=835.962µs
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.986344585Z level=info msg="Executing migration" id="add correlation config column"
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.992503173Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.158048ms
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.994227067Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.995071609Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=845.112µs
Nov 23 15:43:42 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.997134062Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Nov 23 15:43:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:42.998073036Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=938.974µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.000554601Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Nov 23 15:43:43 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:43:43 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.017871457Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.307765ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.019744445Z level=info msg="Executing migration" id="create correlation v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.020749441Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.005357ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.022535677Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.023384189Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=848.403µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.025497744Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.026374216Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=876.492µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.029413754Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.030313787Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=900.643µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.032470412Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.032690379Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=217.997µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.034113836Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.034902125Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=788.999µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.036507057Z level=info msg="Executing migration" id="add provisioning column"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.042245684Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.738357ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.044123383Z level=info msg="Executing migration" id="create entity_events table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.044856582Z level=info msg="Migration successfully executed" id="create entity_events table" duration=733.419µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.046393421Z level=info msg="Executing migration" id="create dashboard public config v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.047209043Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=815.902µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.05095635Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.051315669Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.053256408Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.053634099Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.055306602Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.056114632Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=808.19µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.057988501Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.058878333Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=889.672µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.061367028Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.062299292Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=930.404µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.064045597Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.065104494Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.058737ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.06727634Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.068386048Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.107608ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.072209267Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.073100241Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=890.533µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.074733703Z level=info msg="Executing migration" id="Drop public config table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.075487291Z level=info msg="Migration successfully executed" id="Drop public config table" duration=753.609µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.07699996Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.077833732Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=833.762µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.079879594Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.080722137Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=842.673µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.082397599Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.083264832Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=867.393µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.085152841Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.086001393Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=848.942µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.088936838Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.108735239Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=19.79562ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.110505454Z level=info msg="Executing migration" id="add annotations_enabled column"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.116622082Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.114488ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.118011188Z level=info msg="Executing migration" id="add time_selection_enabled column"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.123843258Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.866391ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.125702686Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.125924911Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=222.985µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.127309527Z level=info msg="Executing migration" id="add share column"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.132920822Z level=info msg="Migration successfully executed" id="add share column" duration=5.611285ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.134290667Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.134456181Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=165.894µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.13673811Z level=info msg="Executing migration" id="create file table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.13750577Z level=info msg="Migration successfully executed" id="create file table" duration=765.77µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.13945595Z level=info msg="Executing migration" id="file table idx: path natural pk"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.140332613Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=875.833µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.14216274Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.142992891Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=831.971µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.148666167Z level=info msg="Executing migration" id="create file_meta table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.149436867Z level=info msg="Migration successfully executed" id="create file_meta table" duration=770.77µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.151417948Z level=info msg="Executing migration" id="file table idx: path key"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.152301991Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=884.383µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.154300813Z level=info msg="Executing migration" id="set path collation in file table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.154344364Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=43.721µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.155939685Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.155983707Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=45.491µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.157317751Z level=info msg="Executing migration" id="managed permissions migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.15767904Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=361.219µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.159414344Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.159565348Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=151.174µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.161066887Z level=info msg="Executing migration" id="RBAC action name migrator"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.162122504Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.056127ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.163550131Z level=info msg="Executing migration" id="Add UID column to playlist"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.169682339Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.131588ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.171141997Z level=info msg="Executing migration" id="Update uid column values in playlist"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.171293351Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=151.444µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.173148059Z level=info msg="Executing migration" id="Add index for uid in playlist"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.1743407Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.194891ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.176640288Z level=info msg="Executing migration" id="update group index for alert rules"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.1770612Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=421.682µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.178758153Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.178937468Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=181.455µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.180610641Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.181061173Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=450.602µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.183120345Z level=info msg="Executing migration" id="add action column to seed_assignment"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.189315806Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.195251ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.190796994Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.197307711Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.507067ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.199040045Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.200041941Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.002416ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.201764736Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Nov 23 15:43:43 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 27 completed events
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.272783236Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=71.01148ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.274584432Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.27565213Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.067568ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.277454886Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.278311459Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=856.193µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.280740252Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.302220725Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.475963ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.305276483Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.311912135Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.634492ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.313399353Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.313624749Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=225.286µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.315346464Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.315477887Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=131.243µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.317281643Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.317448678Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=166.734µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.319282364Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.319435208Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=152.934µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.323512983Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.323669957Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=157.084µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.325205748Z level=info msg="Executing migration" id="create folder table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.325970767Z level=info msg="Migration successfully executed" id="create folder table" duration=764.659µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.327601479Z level=info msg="Executing migration" id="Add index for parent_uid"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.328670947Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.069088ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.330962856Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.331879339Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=915.893µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.334192789Z level=info msg="Executing migration" id="Update folder title length"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.33421728Z level=info msg="Migration successfully executed" id="Update folder title length" duration=27.211µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.335987175Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.336907689Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=920.464µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.339483346Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.340335287Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=850.041µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.342134524Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.343137969Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.002705ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.346264371Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.34664076Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=376.499µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.34935463Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.349575276Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=220.246µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.351364561Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.352272355Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=907.284µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.354052511Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.35517636Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.123549ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.357970102Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.359079931Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.109549ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.360783034Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.361866603Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.083438ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.36369897Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.364664255Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=970.886µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.366522342Z level=info msg="Executing migration" id="create anon_device table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.367354504Z level=info msg="Migration successfully executed" id="create anon_device table" duration=831.332µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.369370996Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.370514965Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.144289ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.373182374Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.374048406Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=866.152µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.376386196Z level=info msg="Executing migration" id="create signing_key table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.377156817Z level=info msg="Migration successfully executed" id="create signing_key table" duration=770.131µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.379509407Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.380374839Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=865.462µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.383299355Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.384405863Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.105908ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.386096057Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.386320482Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=221.455µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.388610012Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.39477665Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.163628ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.397168322Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.397734456Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=566.894µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.399502683Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.400363604Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=861.071µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.402681414Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.403639409Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=957.545µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.405089527Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.405959469Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=869.991µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.407395005Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.40832933Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=933.845µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.410397953Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.411325067Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=927.174µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.412837296Z level=info msg="Executing migration" id="create sso_setting table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.413691558Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=855.472µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.416526891Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.417159517Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=632.956µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.418779539Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.419017536Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=238.557µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.421582941Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.421630182Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=47.131µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.423797198Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.430156493Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.359435ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.431651491Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.438063966Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.412365ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.439327509Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.439609896Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=263.396µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=migrator t=2025-11-23T20:43:43.441076173Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.399719077s
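
The migrator summary above (performed=547 skipped=0) is also recorded row by row in Grafana's migration_log table. A minimal sketch for cross-checking it, assuming the default SQLite store at /var/lib/grafana/grafana.db (inside the cephadm-managed container the file may be bind-mounted elsewhere):

    import sqlite3

    # Assumed default SQLite path; adjust for the container bind mount.
    conn = sqlite3.connect("/var/lib/grafana/grafana.db")
    rows = conn.execute(
        "SELECT migration_id, success, timestamp FROM migration_log "
        "ORDER BY timestamp DESC LIMIT 10"
    ).fetchall()
    for migration_id, success, ts in rows:
        print(f"{ts}  success={success}  {migration_id}")
    conn.close()
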
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore t=2025-11-23T20:43:43.442130121Z level=info msg="Created default organization"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=secrets t=2025-11-23T20:43:43.443788683Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=plugin.store t=2025-11-23T20:43:43.468800858Z level=info msg="Loading plugins..."
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=local.finder t=2025-11-23T20:43:43.560178023Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=plugin.store t=2025-11-23T20:43:43.560207414Z level=info msg="Plugins loaded" count=55 duration=91.407916ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=query_data t=2025-11-23T20:43:43.562855282Z level=info msg="Query Service initialization"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=live.push_http t=2025-11-23T20:43:43.565840539Z level=info msg="Live Push Gateway initialization"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.migration t=2025-11-23T20:43:43.568520568Z level=info msg=Starting
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.migration t=2025-11-23T20:43:43.568886118Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.migration orgID=1 t=2025-11-23T20:43:43.569239157Z level=info msg="Migrating alerts for organisation"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.migration orgID=1 t=2025-11-23T20:43:43.569897743Z level=info msg="Alerts found to migrate" alerts=0
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.migration t=2025-11-23T20:43:43.571360701Z level=info msg="Completed alerting migration"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.state.manager t=2025-11-23T20:43:43.58873449Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=infra.usagestats.collector t=2025-11-23T20:43:43.590368561Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=provisioning.datasources t=2025-11-23T20:43:43.591256565Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=provisioning.alerting t=2025-11-23T20:43:43.6004168Z level=info msg="starting to provision alerting"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=provisioning.alerting t=2025-11-23T20:43:43.600437801Z level=info msg="finished to provision alerting"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.state.manager t=2025-11-23T20:43:43.600696717Z level=info msg="Warming state cache for startup"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.multiorg.alertmanager t=2025-11-23T20:43:43.600835791Z level=info msg="Starting MultiOrg Alertmanager"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.state.manager t=2025-11-23T20:43:43.601102888Z level=info msg="State cache has been initialized" states=0 duration=406.651µs
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ngalert.scheduler t=2025-11-23T20:43:43.6011475Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ticker t=2025-11-23T20:43:43.601197351Z level=info msg=starting first_tick=2025-11-23T20:43:50Z
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=grafanaStorageLogger t=2025-11-23T20:43:43.601772075Z level=info msg="Storage starting"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=http.server t=2025-11-23T20:43:43.605793449Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=http.server t=2025-11-23T20:43:43.60622665Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore.transactions t=2025-11-23T20:43:43.640228527Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=provisioning.dashboard t=2025-11-23T20:43:43.659319728Z level=info msg="starting to provision dashboards"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=plugins.update.checker t=2025-11-23T20:43:43.680054363Z level=info msg="Update check succeeded" duration=79.094578ms
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore.transactions t=2025-11-23T20:43:43.701555257Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=grafana.update.checker t=2025-11-23T20:43:43.706348321Z level=info msg="Update check succeeded" duration=105.787337ms
Nov 23 15:43:43 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore.transactions t=2025-11-23T20:43:43.714526972Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
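
The three "Database locked, sleeping then retrying" entries above are Grafana's SQLite store serializing concurrent writers during startup: SQLite allows only one writer at a time, so a transaction that hits the lock backs off and retries. A minimal Python sketch of the same pattern (not Grafana's actual Go implementation):

    import sqlite3
    import time

    def execute_with_retry(db_path, sql, params=(), retries=5, backoff=0.05):
        """Retry a write when another connection holds the SQLite lock."""
        for attempt in range(retries):
            try:
                with sqlite3.connect(db_path, timeout=0) as conn:
                    return conn.execute(sql, params)
            except sqlite3.OperationalError as err:
                if "database is locked" not in str(err) or attempt == retries - 1:
                    raise
                time.sleep(backoff * (attempt + 1))  # back off, then retry
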
Nov 23 15:43:43 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:43 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=grafana-apiserver t=2025-11-23T20:43:43.841238197Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=grafana-apiserver t=2025-11-23T20:43:43.841963606Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=provisioning.dashboard t=2025-11-23T20:43:43.917730129Z level=info msg="finished to provision dashboards"
Nov 23 15:43:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204343 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: Deploying daemon haproxy.rgw.default.compute-2.tmivar on compute-2
Nov 23 15:43:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:44 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:44 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 23 15:43:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:44.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
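
The "beast:" lines are radosgw's access log; an anonymous "HEAD / HTTP/1.0" answered in ~0s reads like a load-balancer health probe rather than a real client (a hedged reading, but consistent with the haproxy checks elsewhere in this log). A small sketch that parses these lines into fields:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+)'
    )

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous '
            '[23/Nov/2025:20:43:44.688 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    match = BEAST_RE.search(line)
    print(match.groupdict())
    # {'ip': '192.168.122.100', 'user': 'anonymous',
    #  'ts': '23/Nov/2025:20:43:44.688 +0000',
    #  'req': 'HEAD / HTTP/1.0', 'status': '200'}
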
Nov 23 15:43:44 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.746747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930624746950, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7362, "num_deletes": 250, "total_data_size": 13980980, "memory_usage": 14686544, "flush_reason": "Manual Compaction"}
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930624858796, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12628767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7499, "table_properties": {"data_size": 12601632, "index_size": 17484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 82377, "raw_average_key_size": 24, "raw_value_size": 12535548, "raw_average_value_size": 3668, "num_data_blocks": 771, "num_entries": 3417, "num_filter_entries": 3417, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930338, "oldest_key_time": 1763930338, "file_creation_time": 1763930624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 112232 microseconds, and 21993 cpu microseconds.
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.858982) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12628767 bytes OK
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.859042) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.874315) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.874351) EVENT_LOG_v1 {"time_micros": 1763930624874345, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.874379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13948000, prev total WAL file size 13948000, number of live WAL files 2.
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.877642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930624877756, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12689197, "oldest_snapshot_seqno": -1}
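RocksDB interleaves machine-readable EVENT_LOG_v1 JSON objects (flush_started, table_file_creation, compaction_started, ...) with its human-readable lines, as in the mon log above. A minimal sketch for extracting and tallying those events from captured journal text; the input path is a hypothetical placeholder.

import json
import re
from collections import Counter

EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

def iter_events(lines):
    # Yield each embedded EVENT_LOG_v1 payload as a parsed dict.
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

def summarize(lines):
    return dict(Counter(ev.get("event", "?") for ev in iter_events(lines)))

# e.g. summarize(open("mon.journal.txt"))  # hypothetical capture file
# -> {'flush_started': 1, 'table_file_creation': 1, 'flush_finished': 1,
#     'compaction_started': 1, ...}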
Nov 23 15:43:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 0 B/s, 0 objects/s recovering
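The degraded/misplaced percentages in the pgmap line are plain ratios over the total object count, which a quick check reproduces:

total = 226
print(f"{3 / total:.3%}")  # 1.327% degraded, as logged
print(f"{2 / total:.3%}")  # 0.885% misplaced, as logged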
Nov 23 15:43:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:44 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3240 keys, 12671247 bytes, temperature: kUnknown
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625022650, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12671247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12644414, "index_size": 17635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 81373, "raw_average_key_size": 25, "raw_value_size": 12579885, "raw_average_value_size": 3882, "num_data_blocks": 778, "num_entries": 3240, "num_filter_entries": 3240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763930624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:45.023167) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12671247 bytes
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:45.025392) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.4 rd, 87.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.1, 0.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3526, records dropped: 286 output_compression: NoCompression
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:45.025437) EVENT_LOG_v1 {"time_micros": 1763930625025410, "job": 4, "event": "compaction_finished", "compaction_time_micros": 145154, "compaction_time_cpu_micros": 30538, "output_level": 6, "num_output_files": 1, "total_output_size": 12671247, "num_input_records": 3526, "num_output_records": 3240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625030296, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625030414, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625030539, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:44.877550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.xymmfk on compute-0
Nov 23 15:43:45 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.xymmfk on compute-0
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 23 15:43:45 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.798864671 +0000 UTC m=+0.042919547 container create f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, io.openshift.expose-services=, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived)
Nov 23 15:43:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:45 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe7c0041f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:45 np0005532761 systemd[1]: Started libpod-conmon-f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb.scope.
Nov 23 15:43:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.777542942 +0000 UTC m=+0.021597848 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.88493024 +0000 UTC m=+0.128985156 container init f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container)
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.891460237 +0000 UTC m=+0.135515103 container start f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, description=keepalived for Ceph)
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.895237935 +0000 UTC m=+0.139292861 container attach f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Nov 23 15:43:45 np0005532761 adoring_goldwasser[99372]: 0 0
Nov 23 15:43:45 np0005532761 systemd[1]: libpod-f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb.scope: Deactivated successfully.
Nov 23 15:43:45 np0005532761 conmon[99372]: conmon f41eeeedbc34d687702d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb.scope/container/memory.events
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.898768456 +0000 UTC m=+0.142823342 container died f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Nov 23 15:43:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-aa54e1355517ff80b452db70ff94c62d88cb7aa4bc6eb8674139bae3c5ccb5a2-merged.mount: Deactivated successfully.
Nov 23 15:43:45 np0005532761 podman[99355]: 2025-11-23 20:43:45.935644296 +0000 UTC m=+0.179699162 container remove f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb (image=quay.io/ceph/keepalived:2.2.4, name=adoring_goldwasser, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, version=2.2.4)
Nov 23 15:43:45 np0005532761 systemd[1]: libpod-conmon-f41eeeedbc34d687702d96f894cf42c9e48ad13fdd326f141bb5bca04d86b8cb.scope: Deactivated successfully.
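The podman lines above trace one short-lived keepalived container (f41eeeedbc34, run by cephadm ahead of the real deployment) through create, init, start, attach, died, and remove in roughly 140 ms. A sketch that reduces such journal lines to (timestamp, action, short id, image); the regex is inferred from the samples here.

import re

PODMAN_RE = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC '
    r'm=\+[\d.]+ container (?P<action>\w+) (?P<cid>[0-9a-f]{64}) '
    r'\(image=(?P<image>[^,)]+)'
)

def lifecycle(lines):
    # Yield one tuple per container event; non-container lines
    # (e.g. "image pull") simply do not match and are skipped.
    for line in lines:
        m = PODMAN_RE.search(line)
        if m:
            yield (m['ts'], m['action'], m['cid'][:12], m['image'])

# Applied to this journal: created 20:43:45.798, started .891,
# died .898, removed .935 for container f41eeeedbc34.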
Nov 23 15:43:45 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:46 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:46 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:46 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:46 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:46 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: Deploying daemon keepalived.rgw.default.compute-0.xymmfk on compute-0
Nov 23 15:43:46 np0005532761 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.xymmfk for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:46 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:46.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:46 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 23 15:43:46 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 23 15:43:46 np0005532761 podman[99520]: 2025-11-23 20:43:46.782850461 +0000 UTC m=+0.040414653 container create c1d7f26510fc537e2d2062d11f2624de7dd6bbbd8bce087f05cb12febb6aee07 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 15:43:46 np0005532761 systemd-logind[820]: New session 37 of user zuul.
Nov 23 15:43:46 np0005532761 systemd[1]: Started Session 37 of User zuul.
Nov 23 15:43:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18acd5ddaff9379eb0923fc794a559557124aa0d19e32a6dd0cf0d899913a91f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:46 np0005532761 podman[99520]: 2025-11-23 20:43:46.832456679 +0000 UTC m=+0.090020891 container init c1d7f26510fc537e2d2062d11f2624de7dd6bbbd8bce087f05cb12febb6aee07 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Nov 23 15:43:46 np0005532761 podman[99520]: 2025-11-23 20:43:46.836785742 +0000 UTC m=+0.094349934 container start c1d7f26510fc537e2d2062d11f2624de7dd6bbbd8bce087f05cb12febb6aee07 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64)
Nov 23 15:43:46 np0005532761 bash[99520]: c1d7f26510fc537e2d2062d11f2624de7dd6bbbd8bce087f05cb12febb6aee07
Nov 23 15:43:46 np0005532761 podman[99520]: 2025-11-23 20:43:46.764507068 +0000 UTC m=+0.022071300 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:46 np0005532761 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.xymmfk for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Starting VRRP child process, pid=4
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: Startup complete
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:46 2025: (VI_0) Entering BACKUP STATE
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: (VI_0) Entering BACKUP STATE (init)
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:46 2025: VRRP_Script(check_backend) succeeded
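The deployed keepalived instance reads /etc/keepalived/keepalived.conf (bind-mounted from the host, per the xfs remount line above), starts a VRRP child, enters BACKUP, and runs a check_backend track script. A hypothetical rendering sketch in the spirit of what cephadm generates for an ingress service: VI_0, check_backend, and the config path appear in this log, while the priority, peers, check command, and password placeholder below are illustrative assumptions, as are the VIP and interface values taken from the cephadm INFO lines (192.168.122.2 on br-ex).

# Hypothetical keepalived.conf template; literal braces are doubled
# for str.format.
CONF_TEMPLATE = """\
vrrp_script check_backend {{
    script "{check_script}"
    weight -20
    interval 2
}}

vrrp_instance VI_0 {{
    state BACKUP
    interface {interface}
    virtual_router_id 51
    priority {priority}
    authentication {{
        auth_type PASS
        auth_pass {password}
    }}
    virtual_ipaddress {{
        {vip}
    }}
    track_script {{
        check_backend
    }}
}}
"""

def render(vip="192.168.122.2/24", interface="br-ex", priority=90,
           password="<keepalived_password>",
           check_script="/usr/bin/false  # illustrative placeholder"):
    return CONF_TEMPLATE.format(vip=vip, interface=interface,
                                priority=priority, password=password,
                                check_script=check_script)

print(render())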
Nov 23 15:43:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:46.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.zjypck on compute-2
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.zjypck on compute-2
Nov 23 15:43:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v80: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 keys/s, 0 objects/s recovering
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 23 15:43:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
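The audited mon_commands above show the mgr stepping pgp_num_actual as it ramps pools up. The same JSON command can be issued from any client through librados; a minimal sketch using the python3-rados binding, where the conf path is an assumption for illustration.

import json
import rados  # python3-rados binding

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({
        "prefix": "osd pool set",
        "pool": "default.rgw.log",
        "var": "pgp_num_actual",
        "val": "5",
    })
    # Returns (ret, outbuf, outs); ret == 0 corresponds to the
    # "finished" audit line logged by the mon.
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)
finally:
    cluster.shutdown()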
Nov 23 15:43:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:46 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb[97681]: Sun Nov 23 20:43:47 2025: (VI_0) Entering MASTER STATE
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 15:43:47 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 23 15:43:47 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 23 15:43:47 np0005532761 python3.9[99694]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:43:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:47 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded)
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 23 15:43:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 23 15:43:48 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 4b90b902-0966-422f-9d0b-d57bb315c1c4 (Global Recovery Event) in 5 seconds
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: Deploying daemon keepalived.rgw.default.compute-2.zjypck on compute-2
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded)
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: Cluster is now healthy
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:48 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 23 15:43:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:43:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 keys/s, 0 objects/s recovering
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 23 15:43:48 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 23 15:43:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:48 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 70 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=70) [1] r=0 lpr=70 pi=[63,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 70 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=70) [1] r=0 lpr=70 pi=[63,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 70 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=70) [1] r=0 lpr=70 pi=[63,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 70 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=70) [1] r=0 lpr=70 pi=[63,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.132627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629132657, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 417, "num_deletes": 251, "total_data_size": 247598, "memory_usage": 256856, "flush_reason": "Manual Compaction"}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629136769, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 244168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7500, "largest_seqno": 7916, "table_properties": {"data_size": 241580, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6752, "raw_average_key_size": 19, "raw_value_size": 236093, "raw_average_value_size": 672, "num_data_blocks": 26, "num_entries": 351, "num_filter_entries": 351, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930625, "oldest_key_time": 1763930625, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 4221 microseconds, and 2251 cpu microseconds.
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.136844) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 244168 bytes OK
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.136864) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.138298) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.138322) EVENT_LOG_v1 {"time_micros": 1763930629138315, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.138337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 244872, prev total WAL file size 244872, number of live WAL files 2.
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.138727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(238KB)], [20(12MB)]
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629138756, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12915415, "oldest_snapshot_seqno": -1}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev 43e1f0b8-48fc-4af3-988f-0ebf3e76eef0 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 23 15:43:49 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event 43e1f0b8-48fc-4af3-988f-0ebf3e76eef0 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mgr[74869]: [progress INFO root] update: starting ev a62f882d-244a-4376-98e9-666dc78c294b (Updating prometheus deployment (+1 -> 1))
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3071 keys, 11707280 bytes, temperature: kUnknown
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629266955, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11707280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11682467, "index_size": 16064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 79181, "raw_average_key_size": 25, "raw_value_size": 11621528, "raw_average_value_size": 3784, "num_data_blocks": 702, "num_entries": 3071, "num_filter_entries": 3071, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.267248) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11707280 bytes
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.269186) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.7 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(100.8) write-amplify(47.9) OK, records in: 3591, records dropped: 520 output_compression: NoCompression
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.269211) EVENT_LOG_v1 {"time_micros": 1763930629269200, "job": 6, "event": "compaction_finished", "compaction_time_micros": 128288, "compaction_time_cpu_micros": 34755, "output_level": 6, "num_output_files": 1, "total_output_size": 11707280, "num_input_records": 3591, "num_output_records": 3071, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629269407, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629272195, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.138683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
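The amplification figures in the JOB 6 summary follow directly from the table sizes logged earlier: a 238 KB L0 flush (table #22) forces a rewrite of the ~12 MB L6 file (table #20), hence the large write-amplify. A quick arithmetic check against the logged values:

l0_in = 244_168       # bytes, L0 input table #22
l6_in = 12_671_247    # bytes, L6 input table #20
out   = 11_707_280    # bytes, compacted output table #23

print(f"{out / l0_in:.1f}")                     # ~47.9 write-amplify
print(f"{(l0_in + l6_in + out) / l0_in:.1f}")   # ~100.8 read-write-amplify
# l0_in + l6_in also reproduces the logged input_data_size of 12915415.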
Nov 23 15:43:49 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Nov 23 15:43:49 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 23 15:43:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:49 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=71) [1]/[2] r=-1 lpr=71 pi=[63,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:49 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-rgw-default-compute-0-xymmfk[99536]: Sun Nov 23 20:43:50 2025: (VI_0) Entering MASTER STATE
Nov 23 15:43:50 np0005532761 ceph-mon[74569]: Deploying daemon prometheus.compute-0 on compute-0
Nov 23 15:43:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:50 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:50 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 23 15:43:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:50.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:50 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 23 15:43:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 23 15:43:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 23 15:43:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 23 15:43:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
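The anonymous "HEAD / HTTP/1.0" requests arriving every ~2 s from 192.168.122.100 and 192.168.122.102 are load-balancer health probes rather than client traffic; the keepalived-rgw daemon entering MASTER state just above fits that pattern. A probe of this shape can be reproduced by hand (the port is an assumption, since the log does not record the RGW frontend port):

    curl -sI http://192.168.122.100:8080/    # -I sends HEAD; port 8080 assumed, not in the log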
Nov 23 15:43:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 7 objects/s recovering
Nov 23 15:43:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:50 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:51 np0005532761 python3.9[100085]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
  _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 23 15:43:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 23 15:43:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:51 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 23 15:43:51 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.5( v 72'1004 (0'0,72'1004] local-lis/les=0/0 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 luod=0'0 crt=65'1001 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 73 pg[10.5( v 72'1004 (0'0,72'1004] local-lis/les=0/0 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=65'1001 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
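osd.1 has just crossed two peering intervals for pool 10: in epoch 71 it lost the acting-primary role for these PGs (state<Start>: transitioning to Stray), and in epoch 73 it takes the role back (transitioning to Primary) as the remap settles. While this is in flight, the standard way to inspect a PG's peering state is the pg query interface (pgid 10.5 taken from the lines above):

    ceph pg 10.5 query      # full peering/recovery state as JSON
    ceph pg ls remapped     # PGs still carrying a remapped state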
Nov 23 15:43:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:52 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:43:52 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Nov 23 15:43:52 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Nov 23 15:43:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:52 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:43:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 23 15:43:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:52.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 23 15:43:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 7 objects/s recovering
Nov 23 15:43:52 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 23 15:43:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:52 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:53 np0005532761 ceph-mgr[74869]: [progress INFO root] Writing back 29 completed events
Nov 23 15:43:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 74 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 74 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 74 pg[10.5( v 72'1004 (0'0,72'1004] local-lis/les=73/74 n=6 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=72'1004 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 23 15:43:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:53 np0005532761 ceph-mgr[74869]: [progress WARNING root] Starting Global Recovery Event, 8 pgs not in active + clean state
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 74 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=71/63 les/c/f=72/64/0 sis=73) [1] r=0 lpr=73 pi=[63,73)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:53 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 23 15:43:53 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:53 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.117881194 +0000 UTC m=+4.178684647 volume create 061249fe8fa0a268133110f30627250c47ba5f185535ab178b9b7170aaad1dc7
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.130160141 +0000 UTC m=+4.190963574 container create ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 systemd[1]: Started libpod-conmon-ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320.scope.
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.098537985 +0000 UTC m=+4.159341458 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 23 15:43:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5692645c5791ca4851afe52c1e2943df714d3db9cbab48b986547f07b27747e4/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.225235711 +0000 UTC m=+4.286039164 container init ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.233552734 +0000 UTC m=+4.294356187 container start ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.237321352 +0000 UTC m=+4.298124805 container attach ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 sad_leakey[100280]: 65534 65534
Nov 23 15:43:54 np0005532761 systemd[1]: libpod-ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320.scope: Deactivated successfully.
Nov 23 15:43:54 np0005532761 conmon[100280]: conmon ba1286d634b33ba38ab8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320.scope/container/memory.events
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.240069803 +0000 UTC m=+4.300873256 container died ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5692645c5791ca4851afe52c1e2943df714d3db9cbab48b986547f07b27747e4-merged.mount: Deactivated successfully.
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.293226003 +0000 UTC m=+4.354029466 container remove ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[99876]: 2025-11-23 20:43:54.296908968 +0000 UTC m=+4.357712441 volume remove 061249fe8fa0a268133110f30627250c47ba5f185535ab178b9b7170aaad1dc7
Nov 23 15:43:54 np0005532761 systemd[1]: libpod-conmon-ba1286d634b33ba38ab8ea91336a3b38154d8ab7406e662b3e61c4d9361af320.scope: Deactivated successfully.
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.367331663 +0000 UTC m=+0.041481841 volume create 7706ef187d41fb3c3da757b99ac70cdaee1b08d7ecc18396e2d5723469dc9792
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.376099628 +0000 UTC m=+0.050249806 container create c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 systemd[1]: Started libpod-conmon-c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda.scope.
Nov 23 15:43:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:43:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515cd07ce0e4ef210348d3b2dc2591e1dc7a2196321ef44bfd1914408123035c/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.350268333 +0000 UTC m=+0.024418531 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.447087118 +0000 UTC m=+0.121237316 container init c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.452891568 +0000 UTC m=+0.127041746 container start c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 priceless_ride[100313]: 65534 65534
Nov 23 15:43:54 np0005532761 systemd[1]: libpod-c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda.scope: Deactivated successfully.
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.45723374 +0000 UTC m=+0.131383948 container attach c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.457662411 +0000 UTC m=+0.131812579 container died c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-515cd07ce0e4ef210348d3b2dc2591e1dc7a2196321ef44bfd1914408123035c-merged.mount: Deactivated successfully.
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.500357871 +0000 UTC m=+0.174508089 container remove c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_ride, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:54 np0005532761 podman[100296]: 2025-11-23 20:43:54.506391166 +0000 UTC m=+0.180541384 volume remove 7706ef187d41fb3c3da757b99ac70cdaee1b08d7ecc18396e2d5723469dc9792
Nov 23 15:43:54 np0005532761 systemd[1]: libpod-conmon-c3c0171ecf31845f5da8168d7a2516e8899436bd3f3edc23b40893536fd8acda.scope: Deactivated successfully.
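The two short-lived containers above (sad_leakey, priceless_ride) each print "65534 65534" and are torn down within a second: this is cephadm probing the prometheus image for the uid/gid it runs as (65534 is nobody) so config files can be written with matching ownership. The exact invocation is not recorded here; a hedged sketch of an equivalent check:

    podman run --rm --entrypoint stat \
        quay.io/prometheus/prometheus:v2.51.0 -c '%u %g' /etc/prometheus
    # path and flags assumed; would print "65534 65534" (nobody:nobody)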
Nov 23 15:43:54 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:54 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Nov 23 15:43:54 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Nov 23 15:43:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:54 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:54 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:54 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:54.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:43:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:54 np0005532761 systemd[1]: Reloading.
Nov 23 15:43:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:54.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 5 objects/s recovering
Nov 23 15:43:54 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:54 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:54 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:55 np0005532761 systemd[1]: Starting Ceph prometheus.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:55 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:43:55 np0005532761 podman[100457]: 2025-11-23 20:43:55.450485279 +0000 UTC m=+0.043701348 container create 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:55 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:43:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bce5a8fa7c17fb6e44d7fb77609542c659020a829e31f10506a0ff8de479de/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bce5a8fa7c17fb6e44d7fb77609542c659020a829e31f10506a0ff8de479de/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:55 np0005532761 podman[100457]: 2025-11-23 20:43:55.510692981 +0000 UTC m=+0.103909060 container init 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:55 np0005532761 podman[100457]: 2025-11-23 20:43:55.515046563 +0000 UTC m=+0.108262632 container start 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:43:55 np0005532761 bash[100457]: 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6
Nov 23 15:43:55 np0005532761 podman[100457]: 2025-11-23 20:43:55.43153943 +0000 UTC m=+0.024755489 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 23 15:43:55 np0005532761 systemd[1]: Started Ceph prometheus.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:55 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 23 15:43:55 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
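The scrub starts / scrub ok pairs throughout this window are the OSD's routine background scrubbing, logged at DBG and finishing almost instantly because the pools are nearly empty. The same checks can be requested on demand (pgid 9.4 taken from the line above):

    ceph pg scrub 9.4          # light scrub: compares object metadata across replicas
    ceph pg deep-scrub 9.4     # deep scrub: also reads and checksums object data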
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.586Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.587Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.587Z caller=main.go:623 level=info host_details="(Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 x86_64 compute-0 (none))"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.587Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.587Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.594Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.595Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.602Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.602Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.606Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.606Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.12µs
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.606Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.606Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.606Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=42.141µs wal_replay_duration=439.771µs wbl_replay_duration=230ns total_replay_duration=512.163µs
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.609Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.609Z caller=main.go:1153 level=info msg="TSDB started"
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.609Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:55 np0005532761 ceph-mgr[74869]: [progress INFO root] complete: finished ev a62f882d-244a-4376-98e9-666dc78c294b (Updating prometheus deployment (+1 -> 1))
Nov 23 15:43:55 np0005532761 ceph-mgr[74869]: [progress INFO root] Completed event a62f882d-244a-4376-98e9-666dc78c294b (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Nov 23 15:43:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.642Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=32.502138ms db_storage=1.611µs remote_storage=1.96µs web_handler=1.42µs query_engine=1.64µs scrape=3.406277ms scrape_sd=312.129µs notify=27.211µs notify_sd=652.606µs rules=27.294944ms tracing=11.7µs
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.642Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0[100472]: ts=2025-11-23T20:43:55.642Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
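With "Server is ready to receive web requests" logged, the new prometheus.compute-0 daemon can be checked over the listener announced above at 192.168.122.100:9095; /-/ready and /-/healthy are stock Prometheus endpoints:

    curl -s http://192.168.122.100:9095/-/ready      # expect "Prometheus Server is Ready."
    curl -s http://192.168.122.100:9095/-/healthy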
Nov 23 15:43:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:55 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 23 15:43:56 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 23 15:43:56 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 23 15:43:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:56 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:56.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 23 15:43:56 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.oyehye(active, since 103s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:43:56 np0005532761 systemd[1]: session-35.scope: Deactivated successfully.
Nov 23 15:43:56 np0005532761 systemd[1]: session-35.scope: Consumed 46.014s CPU time.
Nov 23 15:43:56 np0005532761 systemd-logind[820]: Session 35 logged out. Waiting for processes to exit.
Nov 23 15:43:56 np0005532761 systemd-logind[820]: Removed session 35.
Nov 23 15:43:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setuser ceph since I am not root
Nov 23 15:43:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ignoring --setgroup ceph since I am not root
Nov 23 15:43:56 np0005532761 ceph-mgr[74869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:43:56 np0005532761 ceph-mgr[74869]: pidfile_write: ignore empty --pid-file
Nov 23 15:43:56 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'alerts'
Nov 23 15:43:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:43:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:56.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:43:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:56.948+0000 7f0999035140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:43:56 np0005532761 ceph-mgr[74869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:43:56 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'balancer'
Nov 23 15:43:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:56 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:57 np0005532761 ceph-mon[74569]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 23 15:43:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:57.030+0000 7f0999035140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532761 ceph-mgr[74869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'cephadm'
Nov 23 15:43:57 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 23 15:43:57 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 23 15:43:57 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'crash'
Nov 23 15:43:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:57.808+0000 7f0999035140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532761 ceph-mgr[74869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'dashboard'
Nov 23 15:43:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:57 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:58.450+0000 7f0999035140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]:  from numpy import show_config as show_numpy_config
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:58.617+0000 7f0999035140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'influx'
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:58 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:58.687+0000 7f0999035140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'insights'
Nov 23 15:43:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'iostat'
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:58.838+0000 7f0999035140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:43:58 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 23 15:43:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:43:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:58.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:58 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe500016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'localpool'
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'mirroring'
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'nfs'
Nov 23 15:43:59 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 23 15:43:59 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 23 15:43:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:43:59 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:43:59.860+0000 7f0999035140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:43:59 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.073+0000 7f0999035140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.145+0000 7f0999035140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'osd_support'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.216+0000 7f0999035140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.308+0000 7f0999035140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'progress'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.378+0000 7f0999035140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'prometheus'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:00 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:00 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 23 15:44:00 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 23 15:44:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.738+0000 7f0999035140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:00.841+0000 7f0999035140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'restful'
Nov 23 15:44:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:00 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rgw'
Nov 23 15:44:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:01.279+0000 7f0999035140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'rook'
Nov 23 15:44:01 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Nov 23 15:44:01 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Nov 23 15:44:01 np0005532761 systemd[1]: session-37.scope: Deactivated successfully.
Nov 23 15:44:01 np0005532761 systemd[1]: session-37.scope: Consumed 8.241s CPU time.
Nov 23 15:44:01 np0005532761 systemd-logind[820]: Session 37 logged out. Waiting for processes to exit.
Nov 23 15:44:01 np0005532761 systemd-logind[820]: Removed session 37.
Nov 23 15:44:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:01 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe500016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:01.843+0000 7f0999035140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'selftest'
Nov 23 15:44:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:01.916+0000 7f0999035140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.003+0000 7f0999035140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'stats'
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'status'
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.151+0000 7f0999035140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telegraf'
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.225+0000 7f0999035140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'telemetry'
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.395+0000 7f0999035140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:44:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz restarted
Nov 23 15:44:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.jtkauz started
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:02 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.624+0000 7f0999035140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'volumes'
Nov 23 15:44:02 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 23 15:44:02 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 23 15:44:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.oyehye(active, since 109s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 15:44:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp restarted
Nov 23 15:44:02 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.kgyerp started
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:02.936+0000 7f0999035140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532761 ceph-mgr[74869]: mgr[py] Loading python module 'zabbix'
Nov 23 15:44:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:02 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.008+0000 7f0999035140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
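The long run of "Module X has missing NOTIFY_TYPES member" entries comes from the mgr restarting (to pick up the freshly enabled prometheus module) and reloading every bundled Python module; the warning is benign and only means a module predates the explicit declaration of which cluster notifications it consumes. A minimal sketch of what a module would declare to satisfy the loader, written against the public mgr_module API (module name here is hypothetical):

    # minimal_mod.py: hedged sketch of a Ceph mgr module that declares
    # NOTIFY_TYPES so the loader warning seen above would not fire.
    from mgr_module import MgrModule, NotifyType

    class MinimalMod(MgrModule):
        # The member the loader checks for: the notifications this module consumes.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            # Invoked by the mgr for each declared notification type.
            if notify_type == NotifyType.osd_map:
                self.log.debug("osdmap changed")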
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Active manager daemon compute-0.oyehye restarted
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.oyehye
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: ms_deliver_dispatch: unhandled message 0x56020e63d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map Activating!
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.oyehye(active, starting, since 0.0253026s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr handle_mgr_map I am now activating
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 all = 0
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 all = 0
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 all = 0
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).mds e10 all = 1
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: balancer
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Starting
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Manager daemon compute-0.oyehye is now available
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:44:03
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: cephadm
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: crash
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: dashboard
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO sso] Loading SSO DB version=1
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: devicehealth
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Starting
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: iostat
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: nfs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: orchestrator
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: pg_autoscaler
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: progress
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [progress INFO root] Loading...
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f0917b38820>, <progress.module.GhostEvent object at 0x7f0917b38a60>, <progress.module.GhostEvent object at 0x7f0917b38a90>, <progress.module.GhostEvent object at 0x7f0917b38ac0>, <progress.module.GhostEvent object at 0x7f0917b38af0>, <progress.module.GhostEvent object at 0x7f0917b38b20>, <progress.module.GhostEvent object at 0x7f0917b38b50>, <progress.module.GhostEvent object at 0x7f0917b38b80>, <progress.module.GhostEvent object at 0x7f0917b38bb0>, <progress.module.GhostEvent object at 0x7f0917b38be0>, <progress.module.GhostEvent object at 0x7f0917b38c10>, <progress.module.GhostEvent object at 0x7f0917b38c40>, <progress.module.GhostEvent object at 0x7f0917b38c70>, <progress.module.GhostEvent object at 0x7f0917b38ca0>, <progress.module.GhostEvent object at 0x7f0917b38cd0>, <progress.module.GhostEvent object at 0x7f0917b38d00>, <progress.module.GhostEvent object at 0x7f0917b38d30>, <progress.module.GhostEvent object at 0x7f0917b38d60>, <progress.module.GhostEvent object at 0x7f0917b38d90>, <progress.module.GhostEvent object at 0x7f0917b38dc0>, <progress.module.GhostEvent object at 0x7f0917b38df0>, <progress.module.GhostEvent object at 0x7f0917b38e20>, <progress.module.GhostEvent object at 0x7f0917b38e50>, <progress.module.GhostEvent object at 0x7f0917b38e80>, <progress.module.GhostEvent object at 0x7f0917b38eb0>, <progress.module.GhostEvent object at 0x7f0917b38ee0>, <progress.module.GhostEvent object at 0x7f0917b38f10>, <progress.module.GhostEvent object at 0x7f0917b38f40>, <progress.module.GhostEvent object at 0x7f0917b38f70>] historic events
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: prometheus
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO root] Cache enabled
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO root] starting metric collection thread
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO root] Starting engine...
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:03] ENGINE Bus STARTING
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Bus STARTING
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: CherryPy Checker:
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: The Application mounted at '' has an empty config.
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] recovery thread starting
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] starting setup
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: rbd_support
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: restful
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: status
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: telemetry
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [restful WARNING root] server not running: no certificate configured
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] PerfHandler: starting
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: mgr load Constructed class from module: volumes
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.201+0000 7f09047c7640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.202+0000 7f08fe7bb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.202+0000 7f08fe7bb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.202+0000 7f08fe7bb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.202+0000 7f08fe7bb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T20:44:03.202+0000 7f08fe7bb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: client.0 error registering admin socket command: (17) File exists
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TaskHandler: starting
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"} v 0)
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] setup complete
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [prometheus INFO root] Engine started.
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 23 15:44:03 np0005532761 systemd-logind[820]: New session 38 of user ceph-admin.
Nov 23 15:44:03 np0005532761 systemd[1]: Started Session 38 of User ceph-admin.
Nov 23 15:44:03 np0005532761 ceph-mgr[74869]: [dashboard INFO dashboard.module] Engine started.
Nov 23 15:44:03 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Nov 23 15:44:03 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: Activating manager daemon compute-0.oyehye
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: Manager daemon compute-0.oyehye is now available
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:03 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204403 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:04 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.oyehye(active, since 1.07219s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:04 np0005532761 podman[100866]: 2025-11-23 20:44:04.261082671 +0000 UTC m=+0.072873379 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:44:04 np0005532761 podman[100866]: 2025-11-23 20:44:04.352277681 +0000 UTC m=+0.164068379 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:44:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:04 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:04 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.1c deep-scrub starts
Nov 23 15:44:04 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.1c deep-scrub ok
Nov 23 15:44:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:44:04] ENGINE Bus STARTING
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:44:04] ENGINE Bus STARTING
Nov 23 15:44:04 np0005532761 podman[100988]: 2025-11-23 20:44:04.784017569 +0000 UTC m=+0.057653457 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:04 np0005532761 podman[100988]: 2025-11-23 20:44:04.822608903 +0000 UTC m=+0.096244781 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:44:04] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:44:04] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:44:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:44:04] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:44:04] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:44:04] ENGINE Bus STARTED
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:44:04] ENGINE Bus STARTED
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: [cephadm INFO cherrypy.error] [23/Nov/2025:20:44:04] ENGINE Client ('192.168.122.100', 33786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:44:04 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : [23/Nov/2025:20:44:04] ENGINE Client ('192.168.122.100', 33786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:44:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:04 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:44:04] ENGINE Bus STARTING
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:44:04] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:44:05 np0005532761 podman[101100]: 2025-11-23 20:44:05.141048241 +0000 UTC m=+0.065412347 container exec 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=10.001870155s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 active pruub 219.176284790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=10.001718521s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 219.176284790s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=10.001319885s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 active pruub 219.176208496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=10.001119614s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 219.176208496s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=9.997642517s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 active pruub 219.173156738s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=9.997603416s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 219.173156738s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=9.997304916s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 active pruub 219.173156738s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76 pruub=9.997233391s) [0] r=-1 lpr=76 pi=[65,76)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 219.173156738s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 76 pg[6.6( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532761 podman[101121]: 2025-11-23 20:44:05.214033972 +0000 UTC m=+0.056086857 container exec_died 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:05 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 15:44:05 np0005532761 podman[101100]: 2025-11-23 20:44:05.2190169 +0000 UTC m=+0.143380976 container exec_died 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:05 np0005532761 podman[101176]: 2025-11-23 20:44:05.399984714 +0000 UTC m=+0.044638511 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:44:05 np0005532761 podman[101176]: 2025-11-23 20:44:05.405142336 +0000 UTC m=+0.049796133 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:44:05 np0005532761 podman[101242]: 2025-11-23 20:44:05.616782112 +0000 UTC m=+0.065046408 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 23 15:44:05 np0005532761 podman[101242]: 2025-11-23 20:44:05.627226231 +0000 UTC m=+0.075490527 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 23 15:44:05 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:05 np0005532761 podman[101306]: 2025-11-23 20:44:05.809304593 +0000 UTC m=+0.048804109 container exec 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:05 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:05 np0005532761 podman[101306]: 2025-11-23 20:44:05.837202052 +0000 UTC m=+0.076701578 container exec_died 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:06 np0005532761 podman[101378]: 2025-11-23 20:44:06.034027745 +0000 UTC m=+0.051388185 container exec 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:44:04] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:44:04] ENGINE Bus STARTED
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: [23/Nov/2025:20:44:04] ENGINE Client ('192.168.122.100', 33786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[6.6( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=76/77 n=2 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 77 pg[6.e( v 49'39 lc 48'19 (0'0,49'39] local-lis/les=76/77 n=1 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:06 np0005532761 podman[101378]: 2025-11-23 20:44:06.200163207 +0000 UTC m=+0.217523607 container exec_died 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:06 np0005532761 podman[101489]: 2025-11-23 20:44:06.518681696 +0000 UTC m=+0.047664409 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:06 np0005532761 podman[101489]: 2025-11-23 20:44:06.554107119 +0000 UTC m=+0.083089832 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:06 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 23 15:44:06 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 23 15:44:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 23 15:44:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:44:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:06 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v7: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 78 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] async=[0] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 78 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] async=[0] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 78 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] async=[0] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 78 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] async=[0] r=0 lpr=77 pi=[65,77)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 23 15:44:07 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 23 15:44:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:07] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Nov 23 15:44:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:07] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Nov 23 15:44:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:07 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.996752739s) [0] async=[0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 50'991 active pruub 227.211975098s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.996690750s) [0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 227.211975098s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.996377945s) [0] async=[0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 50'991 active pruub 227.212020874s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.996331215s) [0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 227.212020874s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.995923042s) [0] async=[0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 50'991 active pruub 227.212005615s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=77/78 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.995891571s) [0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 227.212005615s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.995289803s) [0] async=[0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 50'991 active pruub 227.211944580s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=77/78 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79 pruub=14.995198250s) [0] r=-1 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 227.211944580s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:44:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:08 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 23 15:44:08 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 23 15:44:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:08 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:08.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:08 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v10: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 80 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80) [1] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 80 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80) [1] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 80 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=11.607556343s) [0] r=-1 lpr=80 pi=[53,80)/1 crt=49'39 lcod 0'0 mlcod 0'0 active pruub 224.841659546s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 80 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=53/54 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=11.607529640s) [0] r=-1 lpr=80 pi=[53,80)/1 crt=49'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.841659546s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:09 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe58003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 81 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 81 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=-1 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:09 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Nov 23 15:44:10 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Nov 23 15:44:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:10 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:10.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 23 15:44:10 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 23 15:44:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:10 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v14: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 15 op/s; 56 B/s, 5 objects/s recovering
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:11 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:11 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:11 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.367528426 +0000 UTC m=+0.050928603 container create bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:12 np0005532761 systemd[1]: Started libpod-conmon-bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0.scope.
Nov 23 15:44:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.343674541 +0000 UTC m=+0.027074798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.447725463 +0000 UTC m=+0.131125660 container init bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.45614952 +0000 UTC m=+0.139549687 container start bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:44:12 np0005532761 youthful_carver[102687]: 167 167
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.460737729 +0000 UTC m=+0.144137936 container attach bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:12 np0005532761 systemd[1]: libpod-bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0.scope: Deactivated successfully.
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.461303243 +0000 UTC m=+0.144703410 container died bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:44:12 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a9991f1234f30b68d02dcdae68fae08f4b945fd0e6d196f098c2049c6c38ecdc-merged.mount: Deactivated successfully.
Nov 23 15:44:12 np0005532761 podman[102670]: 2025-11-23 20:44:12.492883567 +0000 UTC m=+0.176283744 container remove bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:12 np0005532761 systemd[1]: libpod-conmon-bcc935fabbd2cdce6d0e78c340eab08524395c7396cdc2f39fccc7f2951b16e0.scope: Deactivated successfully.
Nov 23 15:44:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:12 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:12 np0005532761 podman[102711]: 2025-11-23 20:44:12.631828568 +0000 UTC m=+0.039981212 container create 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:44:12 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Nov 23 15:44:12 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Nov 23 15:44:12 np0005532761 systemd[1]: Started libpod-conmon-7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64.scope.
Nov 23 15:44:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:12 np0005532761 podman[102711]: 2025-11-23 20:44:12.70020163 +0000 UTC m=+0.108354304 container init 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:12 np0005532761 podman[102711]: 2025-11-23 20:44:12.613910576 +0000 UTC m=+0.022063260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:12 np0005532761 podman[102711]: 2025-11-23 20:44:12.715080633 +0000 UTC m=+0.123233277 container start 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:12 np0005532761 podman[102711]: 2025-11-23 20:44:12.720201186 +0000 UTC m=+0.128353870 container attach 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:44:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:12.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:12 np0005532761 ceph-mon[74569]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 23 15:44:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 23 15:44:12 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 23 15:44:12 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 84 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=83/84 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:12 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 84 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=83/84 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83) [1] r=0 lpr=83 pi=[57,83)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:12 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v17: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:13 np0005532761 gallant_hawking[102728]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:44:13 np0005532761 gallant_hawking[102728]: --> All data devices are unavailable
Nov 23 15:44:13 np0005532761 systemd[1]: libpod-7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64.scope: Deactivated successfully.
Nov 23 15:44:13 np0005532761 podman[102711]: 2025-11-23 20:44:13.094392129 +0000 UTC m=+0.502544773 container died 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8bd85500d3f1ea11a7b28f5a7a073a36fc77c7c3607fee0736b9cee80f3fe64c-merged.mount: Deactivated successfully.
Nov 23 15:44:13 np0005532761 podman[102711]: 2025-11-23 20:44:13.131729852 +0000 UTC m=+0.539882496 container remove 7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:13 np0005532761 systemd[1]: libpod-conmon-7f53a4fe8c80729629b848cba64efff3d9fe8e0cb960a5da80ce7cbd4e757c64.scope: Deactivated successfully.
Nov 23 15:44:13 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.a scrub starts
Nov 23 15:44:13 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.a scrub ok
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.697462752 +0000 UTC m=+0.045019541 container create 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:44:13 np0005532761 systemd[91608]: Starting Mark boot as successful...
Nov 23 15:44:13 np0005532761 systemd[91608]: Finished Mark boot as successful.
Nov 23 15:44:13 np0005532761 systemd[1]: Started libpod-conmon-918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b.scope.
Nov 23 15:44:13 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.680819453 +0000 UTC m=+0.028376262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.779383213 +0000 UTC m=+0.126940042 container init 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.787167114 +0000 UTC m=+0.134723943 container start 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.79086397 +0000 UTC m=+0.138420969 container attach 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:44:13 np0005532761 sharp_margulis[102863]: 167 167
Nov 23 15:44:13 np0005532761 systemd[1]: libpod-918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b.scope: Deactivated successfully.
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.795032477 +0000 UTC m=+0.142589256 container died 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:44:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d41995105ba3e8d9d151b77f8e893ef58fa6ae225fd156512121829eefce2cd7-merged.mount: Deactivated successfully.
Nov 23 15:44:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:13 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:13 np0005532761 podman[102846]: 2025-11-23 20:44:13.843240619 +0000 UTC m=+0.190797438 container remove 918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:13 np0005532761 systemd[1]: libpod-conmon-918d254e969b2cba3374dcbbddf87f8a37cde29367e004d4ae35e12288ab306b.scope: Deactivated successfully.
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.004798803 +0000 UTC m=+0.039579011 container create 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 15:44:14 np0005532761 systemd[1]: Started libpod-conmon-6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745.scope.
Nov 23 15:44:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066b70a9b9bb7b90360ac1d352af2f2e2bec26214d6adcc7ff40c4ea89315d8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066b70a9b9bb7b90360ac1d352af2f2e2bec26214d6adcc7ff40c4ea89315d8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066b70a9b9bb7b90360ac1d352af2f2e2bec26214d6adcc7ff40c4ea89315d8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066b70a9b9bb7b90360ac1d352af2f2e2bec26214d6adcc7ff40c4ea89315d8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:13.987930988 +0000 UTC m=+0.022711216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.083033269 +0000 UTC m=+0.117813577 container init 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.093477499 +0000 UTC m=+0.128257707 container start 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.097682127 +0000 UTC m=+0.132462425 container attach 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]: {
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:    "1": [
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:        {
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "devices": [
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "/dev/loop3"
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            ],
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "lv_name": "ceph_lv0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "lv_size": "21470642176",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "name": "ceph_lv0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "tags": {
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.cluster_name": "ceph",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.crush_device_class": "",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.encrypted": "0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.osd_id": "1",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.type": "block",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.vdo": "0",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:                "ceph.with_tpm": "0"
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            },
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "type": "block",
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:            "vg_name": "ceph_vg0"
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:        }
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]:    ]
Nov 23 15:44:14 np0005532761 naughty_mccarthy[102905]: }
Nov 23 15:44:14 np0005532761 systemd[1]: libpod-6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745.scope: Deactivated successfully.
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.384948211 +0000 UTC m=+0.419728419 container died 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-066b70a9b9bb7b90360ac1d352af2f2e2bec26214d6adcc7ff40c4ea89315d8f-merged.mount: Deactivated successfully.
Nov 23 15:44:14 np0005532761 podman[102889]: 2025-11-23 20:44:14.423866564 +0000 UTC m=+0.458646772 container remove 6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mccarthy, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:44:14 np0005532761 systemd[1]: libpod-conmon-6bd33ebb571f15ae59e7eb71d2a07815132d3ad5211a5637d1f42c32a91c6745.scope: Deactivated successfully.
Nov 23 15:44:14 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.c scrub starts
Nov 23 15:44:14 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.c scrub ok
Nov 23 15:44:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:14 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:14 np0005532761 podman[103018]: 2025-11-23 20:44:14.929029243 +0000 UTC m=+0.036382769 container create 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:14 np0005532761 systemd[1]: Started libpod-conmon-0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09.scope.
Nov 23 15:44:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:14 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:15.009886977 +0000 UTC m=+0.117240543 container init 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:14.914482618 +0000 UTC m=+0.021836164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:15.016887537 +0000 UTC m=+0.124241063 container start 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:15.020090619 +0000 UTC m=+0.127444195 container attach 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:15 np0005532761 tender_gauss[103034]: 167 167
Nov 23 15:44:15 np0005532761 systemd[1]: libpod-0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09.scope: Deactivated successfully.
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:15.021997069 +0000 UTC m=+0.129350595 container died 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4f123df6bd7c65d3a4f7df47d57e4de6950dae7397c850d15b0860381149dba3-merged.mount: Deactivated successfully.
Nov 23 15:44:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 5 objects/s recovering
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532761 podman[103018]: 2025-11-23 20:44:15.051416197 +0000 UTC m=+0.158769723 container remove 0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:15 np0005532761 systemd[1]: libpod-conmon-0b1cbb7cdae785a64eb5ff6420f2ce4c05aac15e30960c9937345cb460795b09.scope: Deactivated successfully.
Nov 23 15:44:15 np0005532761 podman[103058]: 2025-11-23 20:44:15.192793281 +0000 UTC m=+0.043147003 container create 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:44:15 np0005532761 systemd[1]: Started libpod-conmon-5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c.scope.
Nov 23 15:44:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de44a42d0b54deee13bfdf77221fe90025b2dde0a52937755586f8a17eb9127/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:15 np0005532761 podman[103058]: 2025-11-23 20:44:15.17295061 +0000 UTC m=+0.023304352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de44a42d0b54deee13bfdf77221fe90025b2dde0a52937755586f8a17eb9127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de44a42d0b54deee13bfdf77221fe90025b2dde0a52937755586f8a17eb9127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de44a42d0b54deee13bfdf77221fe90025b2dde0a52937755586f8a17eb9127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:15 np0005532761 podman[103058]: 2025-11-23 20:44:15.289293958 +0000 UTC m=+0.139647690 container init 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:15 np0005532761 podman[103058]: 2025-11-23 20:44:15.306076111 +0000 UTC m=+0.156429833 container start 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:15 np0005532761 podman[103058]: 2025-11-23 20:44:15.309667633 +0000 UTC m=+0.160021345 container attach 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:15 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 23 15:44:15 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 23 15:44:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:15 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe6c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532761 lvm[103151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:44:15 np0005532761 lvm[103151]: VG ceph_vg0 finished
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 23 15:44:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 23 15:44:15 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 85 pg[6.9( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=85) [1] r=0 lpr=85 pi=[60,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:15 np0005532761 awesome_moore[103075]: {}
Nov 23 15:44:16 np0005532761 systemd[1]: libpod-5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c.scope: Deactivated successfully.
Nov 23 15:44:16 np0005532761 systemd[1]: libpod-5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c.scope: Consumed 1.082s CPU time.
Nov 23 15:44:16 np0005532761 podman[103058]: 2025-11-23 20:44:16.000730094 +0000 UTC m=+0.851083826 container died 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:44:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9de44a42d0b54deee13bfdf77221fe90025b2dde0a52937755586f8a17eb9127-merged.mount: Deactivated successfully.
Nov 23 15:44:16 np0005532761 podman[103058]: 2025-11-23 20:44:16.048259239 +0000 UTC m=+0.898612971 container remove 5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:44:16 np0005532761 systemd[1]: libpod-conmon-5e64d16d703df675fc1f14c443bf3bfb83ae4e3e7d39f6aea3211d588201b66c.scope: Deactivated successfully.
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:16 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:44:16 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:44:16 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.b scrub starts
Nov 23 15:44:16 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.b scrub ok
Nov 23 15:44:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:16 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.793710171 +0000 UTC m=+0.042254570 container create b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:44:16 np0005532761 systemd[1]: Started libpod-conmon-b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409.scope.
Nov 23 15:44:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.771619681 +0000 UTC m=+0.020164090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.869136575 +0000 UTC m=+0.117680964 container init b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.876525495 +0000 UTC m=+0.125069874 container start b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.879400579 +0000 UTC m=+0.127944978 container attach b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:44:16 np0005532761 kind_fermi[103299]: 167 167
Nov 23 15:44:16 np0005532761 systemd[1]: libpod-b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409.scope: Deactivated successfully.
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.884895721 +0000 UTC m=+0.133440100 container died b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:44:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bfe3b0c590d9e1733a0e898209425bbf9849c4cf8eb36c507954f26e406b99b3-merged.mount: Deactivated successfully.
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 23 15:44:16 np0005532761 podman[103283]: 2025-11-23 20:44:16.930081106 +0000 UTC m=+0.178625495 container remove b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409 (image=quay.io/ceph/ceph:v19, name=kind_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:44:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:16.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
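Note: the anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 that recur roughly every two seconds throughout this window are load-balancer health probes against the radosgw beast frontend, not client traffic. They can be reproduced by hand — a minimal sketch; the listen port is an assumption, since these log lines do not record it:

    # hypothetical port 8080; substitute the rgw_frontends port from the cluster config
    curl -sI http://192.168.122.102:8080/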
Nov 23 15:44:16 np0005532761 systemd[1]: libpod-conmon-b4ea786ba0503df1a48fde96ee92c89739ed90884b740b8b32dc8265ef512409.scope: Deactivated successfully.
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:44:16 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 86 pg[6.9( v 49'39 (0'0,49'39] local-lis/les=85/86 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=85) [1] r=0 lpr=85 pi=[60,85)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:16 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.oyehye (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.oyehye (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.oyehye on compute-0
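Note: the handle_command/audit pairs above are the cephadm serve loop reacting to the monmap change: for each daemon it re-fetches the keyring ("auth get" / "auth get-or-create"), asks the mon for a fresh minimal ceph.conf ("config generate-minimal-conf"), and then redeploys that daemon's config. The same minimal conf can be generated by hand — a sketch, assuming admin access on this host via cephadm shell:

    cephadm shell -- ceph config generate-minimal-conf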
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v21: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 5 objects/s recovering
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532761 systemd-logind[820]: New session 39 of user zuul.
Nov 23 15:44:17 np0005532761 systemd[1]: Started Session 39 of User zuul.
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.420214298 +0000 UTC m=+0.056540649 container create c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:44:17 np0005532761 systemd[1]: Started libpod-conmon-c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3.scope.
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.382437974 +0000 UTC m=+0.018764345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 23 15:44:17 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.491780862 +0000 UTC m=+0.128107233 container init c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.49948972 +0000 UTC m=+0.135816081 container start c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.502880438 +0000 UTC m=+0.139206809 container attach c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:17 np0005532761 laughing_sutherland[103449]: 167 167
Nov 23 15:44:17 np0005532761 systemd[1]: libpod-c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3.scope: Deactivated successfully.
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.505469145 +0000 UTC m=+0.141795496 container died c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:17 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fa5279f72500be664d7e256839806e1a0d5617ce8ae8e0339578b20e40cb22f0-merged.mount: Deactivated successfully.
Nov 23 15:44:17 np0005532761 podman[103385]: 2025-11-23 20:44:17.540891588 +0000 UTC m=+0.177217939 container remove c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3 (image=quay.io/ceph/ceph:v19, name=laughing_sutherland, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:17 np0005532761 systemd[1]: libpod-conmon-c74eb81421f1e7f2ba4e19a906f28aa1c29cab01fa2be93a80c791754003eeb3.scope: Deactivated successfully.
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:17 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:17 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 23 15:44:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:17] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Nov 23 15:44:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:17] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Nov 23 15:44:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:17 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: Reconfiguring mgr.compute-0.oyehye (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 23 15:44:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 23 15:44:18 np0005532761 python3.9[103617]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.026231536 +0000 UTC m=+0.045177315 container create 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:18 np0005532761 systemd[1]: Started libpod-conmon-1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833.scope.
Nov 23 15:44:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.004964798 +0000 UTC m=+0.023910587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.105641383 +0000 UTC m=+0.124587132 container init 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.112354306 +0000 UTC m=+0.131300045 container start 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.115795434 +0000 UTC m=+0.134741203 container attach 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:44:18 np0005532761 vibrant_shamir[103664]: 167 167
Nov 23 15:44:18 np0005532761 systemd[1]: libpod-1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833.scope: Deactivated successfully.
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.118038662 +0000 UTC m=+0.136984461 container died 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:44:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-73117e5fe782645f269c953049e1669a45890c1de633fbc1fe3a9354c0a7feb6-merged.mount: Deactivated successfully.
Nov 23 15:44:18 np0005532761 podman[103633]: 2025-11-23 20:44:18.164730585 +0000 UTC m=+0.183676324 container remove 1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 15:44:18 np0005532761 systemd[1]: libpod-conmon-1e95b8a637043b234353f9b3ade52f1230f93c4312a8bde0dd1c23a861b65833.scope: Deactivated successfully.
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.615989226 +0000 UTC m=+0.036376588 container create d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:18 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 23 15:44:18 np0005532761 kernel: ganesha.nfsd[96936]: segfault at 50 ip 00007fbf3191632e sp 00007fbefa7fb210 error 4 in libntirpc.so.5.8[7fbf318fb000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 23 15:44:18 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
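Note: this segfault — instruction pointer inside libntirpc.so.5.8, the TI-RPC library used by NFS-Ganesha — is what takes down the nfs.cephfs daemon; the crashing thread 96936 belongs to process 96892, whose core systemd-coredump captures a few seconds later (see the systemd-coredump entry at 15:44:22). The dump can be inspected with coredumpctl — a minimal sketch:

    coredumpctl list ganesha.nfsd      # locate the dump
    coredumpctl info 96892             # metadata and captured stack trace
    coredumpctl debug 96892            # open the core in gdb, then: bt full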
Nov 23 15:44:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[96888]: 23/11/2025 20:44:18 : epoch 692371d1 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbe60003f90 fd 48 proxy ignored for local
Nov 23 15:44:18 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 23 15:44:18 np0005532761 systemd[1]: Started libpod-conmon-d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97.scope.
Nov 23 15:44:18 np0005532761 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 23 15:44:18 np0005532761 systemd[1]: Started Process Core Dump (PID 103826/UID 0).
Nov 23 15:44:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.680751835 +0000 UTC m=+0.101139197 container init d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.688314279 +0000 UTC m=+0.108701661 container start d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.692743954 +0000 UTC m=+0.113131316 container attach d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:18 np0005532761 intelligent_gagarin[103827]: 167 167
Nov 23 15:44:18 np0005532761 systemd[1]: libpod-d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97.scope: Deactivated successfully.
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.696001978 +0000 UTC m=+0.116389340 container died d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.599932592 +0000 UTC m=+0.020319984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a046207587d0e5a62d63896d50fe64baccdfb22bead084a3b871c9a91f5c997a-merged.mount: Deactivated successfully.
Nov 23 15:44:18 np0005532761 podman[103810]: 2025-11-23 20:44:18.726831363 +0000 UTC m=+0.147218715 container remove d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_gagarin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:18 np0005532761 systemd[1]: libpod-conmon-d4ca5042f1749854cc9dc49bc150c3623721d8f1588b48d6f1605bbd1a7a7e97.scope: Deactivated successfully.
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 23 15:44:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:18.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 23 15:44:18 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: Reconfiguring osd.1 (monmap changed)...
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: Reconfiguring daemon osd.1 on compute-0
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 87 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87 pruub=12.132784843s) [0] r=-1 lpr=87 pi=[65,87)/1 crt=50'991 mlcod 0'0 active pruub 235.176620483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 88 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87 pruub=12.132740021s) [0] r=-1 lpr=87 pi=[65,87)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 235.176620483s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 87 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87 pruub=12.131047249s) [0] r=-1 lpr=87 pi=[65,87)/1 crt=50'991 mlcod 0'0 active pruub 235.176666260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 88 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87 pruub=12.130981445s) [0] r=-1 lpr=87 pi=[65,87)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 235.176666260s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v24: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
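Note: the mgr is splitting placement groups here: pgp_num_actual on cephfs.cephfs.meta and default.rgw.log is raised one step per pass (10 at 15:44:16, 11 at 15:44:17, 12 at 15:44:19) so that the resulting data movement is throttled rather than done in one jump. Per-pool progress can be checked with standard commands — a sketch:

    ceph osd pool get cephfs.cephfs.meta pg_num
    ceph osd pool get cephfs.cephfs.meta pgp_num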
Nov 23 15:44:19 np0005532761 python3.9[103973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:19 np0005532761 podman[104014]: 2025-11-23 20:44:19.365511873 +0000 UTC m=+0.045921575 volume create 9187a99b1e034ad37565afd25e196e6b1821032df80d217098359f02c074574c
Nov 23 15:44:19 np0005532761 podman[104014]: 2025-11-23 20:44:19.382192133 +0000 UTC m=+0.062601825 container create 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:19 np0005532761 systemd[1]: Started libpod-conmon-36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6.scope.
Nov 23 15:44:19 np0005532761 podman[104014]: 2025-11-23 20:44:19.347292673 +0000 UTC m=+0.027702405 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:44:19 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:19 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6896b45f76d59f35b6b35b66a1b27ffce86c21dde4ee06bb74eb40b94ba7c3d7/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Nov 23 15:44:19 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 23 15:44:20 np0005532761 podman[104014]: 2025-11-23 20:44:20.205924272 +0000 UTC m=+0.886333994 container init 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:20 np0005532761 podman[104014]: 2025-11-23 20:44:20.213826846 +0000 UTC m=+0.894236558 container start 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:20 np0005532761 hopeful_babbage[104034]: 65534 65534
Nov 23 15:44:20 np0005532761 systemd[1]: libpod-36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6.scope: Deactivated successfully.
Nov 23 15:44:20 np0005532761 conmon[104034]: conmon 36ca02115d1134e08de6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6.scope/container/memory.events
Nov 23 15:44:20 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 23 15:44:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:20 np0005532761 python3.9[104203]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:44:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v25: 337 pgs: 2 unknown, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Nov 23 15:44:21 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 23 15:44:21 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 23 15:44:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204422 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:44:22 np0005532761 python3.9[104359]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:44:22 np0005532761 podman[104014]: 2025-11-23 20:44:22.411356533 +0000 UTC m=+3.091766275 container attach 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104014]: 2025-11-23 20:44:22.412740078 +0000 UTC m=+3.093149760 container died 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 23 15:44:22 np0005532761 systemd-coredump[103829]: Process 96892 (ganesha.nfsd) of user 0 dumped core.
        Stack trace of thread 46:
        #0  0x00007fbf3191632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
        ELF object binary architecture: AMD x86-64
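Note: the captured trace gives only a module plus offset (libntirpc.so.5.8 + 0x2232e). With the matching library and its debuginfo — which live inside the quay.io/ceph/ceph:v19 container image, not on the host — the offset resolves to a function and source line; a sketch, assuming elfutils is installed and a copy of the library has been extracted from that image:

    eu-addr2line -e libntirpc.so.5.8 0x2232e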
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 15:44:22 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 89 pg[6.b( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89) [1] r=0 lpr=89 pi=[67,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 89 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 89 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 89 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 89 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=65/66 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6896b45f76d59f35b6b35b66a1b27ffce86c21dde4ee06bb74eb40b94ba7c3d7-merged.mount: Deactivated successfully.
Nov 23 15:44:22 np0005532761 podman[104014]: 2025-11-23 20:44:22.491727075 +0000 UTC m=+3.172136757 container remove 36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=hopeful_babbage, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104014]: 2025-11-23 20:44:22.496640851 +0000 UTC m=+3.177050543 volume remove 9187a99b1e034ad37565afd25e196e6b1821032df80d217098359f02c074574c
Nov 23 15:44:22 np0005532761 systemd[1]: libpod-conmon-36ca02115d1134e08de6afbb7bf5805b508969022160384ea70659e48f7f63f6.scope: Deactivated successfully.
Nov 23 15:44:22 np0005532761 systemd[1]: systemd-coredump@0-103826-0.service: Deactivated successfully.
Nov 23 15:44:22 np0005532761 systemd[1]: systemd-coredump@0-103826-0.service: Consumed 1.136s CPU time.
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.570068064 +0000 UTC m=+0.057458832 volume create 125623e2a7a6dd9969a00e9e6267517b49992a7ca9d0fb97ff9807ee234a6770
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.579390493 +0000 UTC m=+0.066781261 container create bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 23 15:44:22 np0005532761 podman[104404]: 2025-11-23 20:44:22.59360243 +0000 UTC m=+0.035439135 container died 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 15:44:22 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 23 15:44:22 np0005532761 systemd[1]: Started libpod-conmon-bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad.scope.
Nov 23 15:44:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f4e964154f11149fdb6e3d9d01fa9ea9fef089c0a8b3facb87f2a861a4c64117-merged.mount: Deactivated successfully.
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.536712054 +0000 UTC m=+0.024102872 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:44:22 np0005532761 podman[104404]: 2025-11-23 20:44:22.637322917 +0000 UTC m=+0.079159632 container remove 8cbaf02f3d72ab8166ed8de2300ed5a45cf0b6dbb3c6f00229511e3bc93c68f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:22 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:44:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdde8a3f8e5954185e30044b30cc32faee4f68548dfe4bc98a6ac16785c6083f/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.664347353 +0000 UTC m=+0.151738141 container init bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.675186643 +0000 UTC m=+0.162577411 container start bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 strange_ritchie[104420]: 65534 65534
Nov 23 15:44:22 np0005532761 systemd[1]: libpod-bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad.scope: Deactivated successfully.
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.678890408 +0000 UTC m=+0.166281226 container attach bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.681416073 +0000 UTC m=+0.168806851 container died bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bdde8a3f8e5954185e30044b30cc32faee4f68548dfe4bc98a6ac16785c6083f-merged.mount: Deactivated successfully.
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.722190884 +0000 UTC m=+0.209581652 container remove bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_ritchie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104389]: 2025-11-23 20:44:22.726272489 +0000 UTC m=+0.213663247 volume remove 125623e2a7a6dd9969a00e9e6267517b49992a7ca9d0fb97ff9807ee234a6770
Nov 23 15:44:22 np0005532761 systemd[1]: libpod-conmon-bf5d7a430207fcb1ad0fea455f6da7898a92e4868b68464ade6b93ede435f0ad.scope: Deactivated successfully.
Nov 23 15:44:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:22 np0005532761 systemd[1]: Stopping Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:44:22 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:44:22 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.544s CPU time.
Nov 23 15:44:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[98437]: ts=2025-11-23T20:44:22.905Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Nov 23 15:44:22 np0005532761 podman[104550]: 2025-11-23 20:44:22.91563579 +0000 UTC m=+0.038807911 container died 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ff08e9448a4e6e871027af73888fe043d3ec51aa1595a432d7bb59a854964e3d-merged.mount: Deactivated successfully.
Nov 23 15:44:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:22 np0005532761 podman[104550]: 2025-11-23 20:44:22.944638247 +0000 UTC m=+0.067810368 container remove 5ea032b1c10a71b4f5f89d46224af307d043e6ccb5e0f88dc05ec8e09c983006 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:22 np0005532761 podman[104550]: 2025-11-23 20:44:22.948131617 +0000 UTC m=+0.071303758 volume remove 3bff29a126d41977d2ef68aaaf86ce768a3d5f4974f61dc4c54cf30b2407f331
Nov 23 15:44:22 np0005532761 bash[104550]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0
Nov 23 15:44:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v27: 337 pgs: 2 unknown, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 23 15:44:23 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@alertmanager.compute-0.service: Deactivated successfully.
Nov 23 15:44:23 np0005532761 systemd[1]: Stopped Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:23 np0005532761 systemd[1]: Starting Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:44:23 np0005532761 podman[104721]: 2025-11-23 20:44:23.256335881 +0000 UTC m=+0.032346615 volume create dd9a40b375179c802639f90d82aa358cb218ae841c74908a842fdbe1d227ac63
Nov 23 15:44:23 np0005532761 podman[104721]: 2025-11-23 20:44:23.262968212 +0000 UTC m=+0.038978946 container create 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9bc68b4b6fa446ae40b32c1f3e155dfcbc50dabc29aa0740da7dad7ea5c9fb/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9bc68b4b6fa446ae40b32c1f3e155dfcbc50dabc29aa0740da7dad7ea5c9fb/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:23 np0005532761 podman[104721]: 2025-11-23 20:44:23.311872292 +0000 UTC m=+0.087883216 container init 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:23 np0005532761 podman[104721]: 2025-11-23 20:44:23.316501371 +0000 UTC m=+0.092512105 container start 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:23 np0005532761 bash[104721]: 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754
Nov 23 15:44:23 np0005532761 podman[104721]: 2025-11-23 20:44:23.242597516 +0000 UTC m=+0.018608270 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 23 15:44:23 np0005532761 systemd[1]: Started Ceph alertmanager.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.348Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.348Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.355Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.362Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:23 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 23 15:44:23 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.416Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.417Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 23 15:44:23 np0005532761 python3.9[104738]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.420Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 23 15:44:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:23.420Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 23 15:44:23 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Nov 23 15:44:23 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Nov 23 15:44:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 90 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=5 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] async=[0] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 90 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=6 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] async=[0] r=0 lpr=89 pi=[65,89)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:23 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 90 pg[6.b( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=89/90 n=1 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89) [1] r=0 lpr=89 pi=[67,89)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:23 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.027208307 +0000 UTC m=+0.062558922 container create f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 systemd[1]: Started libpod-conmon-f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade.scope.
Nov 23 15:44:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.010203019 +0000 UTC m=+0.045553664 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.115442781 +0000 UTC m=+0.150793496 container init f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.1227101 +0000 UTC m=+0.158060755 container start f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 inspiring_franklin[104969]: 472 0
Nov 23 15:44:24 np0005532761 systemd[1]: libpod-f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade.scope: Deactivated successfully.
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.138412044 +0000 UTC m=+0.173762749 container attach f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.139867642 +0000 UTC m=+0.175218317 container died f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-920af3928e840f6bb0e1dac8e9b56b9c31033a536fbdda8f6646fc702f9def4f-merged.mount: Deactivated successfully.
Nov 23 15:44:24 np0005532761 podman[104907]: 2025-11-23 20:44:24.432509514 +0000 UTC m=+0.467860149 container remove f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade (image=quay.io/ceph/grafana:10.4.0, name=inspiring_franklin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 23 15:44:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 23 15:44:24 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 23 15:44:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91 pruub=15.008217812s) [0] async=[0] r=-1 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 50'991 active pruub 243.467941284s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91 pruub=15.008139610s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 243.467941284s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91 pruub=15.007257462s) [0] async=[0] r=-1 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 50'991 active pruub 243.467422485s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:24 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=89/90 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91 pruub=15.007213593s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 243.467422485s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:24 np0005532761 python3.9[105008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:44:24 np0005532761 systemd[1]: libpod-conmon-f51e974b990d31f46705b88dc3a982c94366bfc99b7fa607f7046d5a3c0d4ade.scope: Deactivated successfully.
Nov 23 15:44:24 np0005532761 podman[105016]: 2025-11-23 20:44:24.484145895 +0000 UTC m=+0.028934187 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:44:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v30: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:25 np0005532761 python3.9[105180]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:25 np0005532761 podman[105016]: 2025-11-23 20:44:25.332136969 +0000 UTC m=+0.876925271 container create d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 23 15:44:25 np0005532761 network[105199]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:44:25 np0005532761 network[105200]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:44:25 np0005532761 network[105201]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:44:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:25.363Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001051732s
Nov 23 15:44:25 np0005532761 systemd[1]: Started libpod-conmon-d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316.scope.
Nov 23 15:44:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:25 np0005532761 podman[105016]: 2025-11-23 20:44:25.414518742 +0000 UTC m=+0.959307034 container init d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:25 np0005532761 podman[105016]: 2025-11-23 20:44:25.421164334 +0000 UTC m=+0.965952646 container start d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:25 np0005532761 optimistic_shirley[105205]: 472 0
Nov 23 15:44:25 np0005532761 podman[105016]: 2025-11-23 20:44:25.424982763 +0000 UTC m=+0.969771035 container attach d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:25 np0005532761 podman[105016]: 2025-11-23 20:44:25.425245169 +0000 UTC m=+0.970033441 container died d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 23 15:44:25 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 23 15:44:26 np0005532761 systemd[1]: libpod-d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316.scope: Deactivated successfully.
Nov 23 15:44:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0097cc550505fda88c2e5922955412667aad4bfa93c79c3de250c82f75732873-merged.mount: Deactivated successfully.
Nov 23 15:44:26 np0005532761 podman[105016]: 2025-11-23 20:44:26.030732004 +0000 UTC m=+1.575520296 container remove d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316 (image=quay.io/ceph/grafana:10.4.0, name=optimistic_shirley, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 systemd[1]: libpod-conmon-d42fbe46e3533773e924d1d109dcd6c613681a759d55d2bdd355f9fd9fc51316.scope: Deactivated successfully.
Nov 23 15:44:26 np0005532761 systemd[1]: Stopping Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=server t=2025-11-23T20:44:26.292562862Z level=info msg="Shutdown started" reason="System signal: terminated"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=ticker t=2025-11-23T20:44:26.293038585Z level=info msg=stopped last_tick=2025-11-23T20:44:20Z
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=tracing t=2025-11-23T20:44:26.293103516Z level=info msg="Closing tracing"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=grafana-apiserver t=2025-11-23T20:44:26.293473696Z level=info msg="StorageObjectCountTracker pruner is exiting"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[98958]: logger=sqlstore.transactions t=2025-11-23T20:44:26.304657984Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 23 15:44:26 np0005532761 podman[105271]: 2025-11-23 20:44:26.323071278 +0000 UTC m=+0.064582345 container died 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-098cccf79c49496c75e4e40d3effa42af15b94c0911522fe67bfd1d617ca8a90-merged.mount: Deactivated successfully.
Nov 23 15:44:26 np0005532761 podman[105271]: 2025-11-23 20:44:26.380267893 +0000 UTC m=+0.121778960 container remove 8a8cc8d6a4767d4c02dbeac229da6e76ef792904a61d3190e84ff4204a8e121b (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 bash[105271]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 23 15:44:26 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@grafana.compute-0.service: Deactivated successfully.
Nov 23 15:44:26 np0005532761 systemd[1]: Stopped Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:26 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@grafana.compute-0.service: Consumed 3.744s CPU time.
Nov 23 15:44:26 np0005532761 systemd[1]: Starting Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204426 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:44:26 np0005532761 podman[105395]: 2025-11-23 20:44:26.712822564 +0000 UTC m=+0.043090882 container create 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f7268ef4d9c6fa43ad52522504b5123a3f8f555266ec9e29cd3e33da5422c3/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f7268ef4d9c6fa43ad52522504b5123a3f8f555266ec9e29cd3e33da5422c3/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f7268ef4d9c6fa43ad52522504b5123a3f8f555266ec9e29cd3e33da5422c3/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f7268ef4d9c6fa43ad52522504b5123a3f8f555266ec9e29cd3e33da5422c3/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f7268ef4d9c6fa43ad52522504b5123a3f8f555266ec9e29cd3e33da5422c3/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:26 np0005532761 podman[105395]: 2025-11-23 20:44:26.77048678 +0000 UTC m=+0.100755108 container init 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 podman[105395]: 2025-11-23 20:44:26.77747629 +0000 UTC m=+0.107744598 container start 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:26 np0005532761 bash[105395]: 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb
Nov 23 15:44:26 np0005532761 podman[105395]: 2025-11-23 20:44:26.693508026 +0000 UTC m=+0.023776334 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 23 15:44:26 np0005532761 systemd[1]: Started Ceph grafana.compute-0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:26 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 23 15:44:26 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:26 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 23 15:44:26 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942155124Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-23T20:44:26Z
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942518244Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942538744Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942547224Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942554455Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942561155Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942568005Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942574685Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942581875Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942588715Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942595206Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942602186Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942616206Z level=info msg=Target target=[all]
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942628686Z level=info msg="Path Home" path=/usr/share/grafana
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942634967Z level=info msg="Path Data" path=/var/lib/grafana
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942641107Z level=info msg="Path Logs" path=/var/log/grafana
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942649077Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942655507Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=settings t=2025-11-23T20:44:26.942661957Z level=info msg="App mode production"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=sqlstore t=2025-11-23T20:44:26.943104518Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=sqlstore t=2025-11-23T20:44:26.943138129Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=migrator t=2025-11-23T20:44:26.943956161Z level=info msg="Starting DB migrations"
Nov 23 15:44:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=migrator t=2025-11-23T20:44:26.96298413Z level=info msg="migrations completed" performed=0 skipped=547 duration=596.015µs
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=sqlstore t=2025-11-23T20:44:26.964282594Z level=info msg="Created default organization"
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=secrets t=2025-11-23T20:44:26.964712935Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 23 15:44:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugin.store t=2025-11-23T20:44:26.986193889Z level=info msg="Loading plugins..."
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v33: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=local.finder t=2025-11-23T20:44:27.079129044Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugin.store t=2025-11-23T20:44:27.079162975Z level=info msg="Plugins loaded" count=55 duration=92.973196ms
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=query_data t=2025-11-23T20:44:27.081766162Z level=info msg="Query Service initialization"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=live.push_http t=2025-11-23T20:44:27.085351845Z level=info msg="Live Push Gateway initialization"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.migration t=2025-11-23T20:44:27.088553247Z level=info msg=Starting
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.state.manager t=2025-11-23T20:44:27.10339301Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=infra.usagestats.collector t=2025-11-23T20:44:27.106461569Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=provisioning.datasources t=2025-11-23T20:44:27.109889987Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=provisioning.alerting t=2025-11-23T20:44:27.142194509Z level=info msg="starting to provision alerting"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=provisioning.alerting t=2025-11-23T20:44:27.14222566Z level=info msg="finished to provision alerting"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.state.manager t=2025-11-23T20:44:27.142326533Z level=info msg="Warming state cache for startup"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafanaStorageLogger t=2025-11-23T20:44:27.142665042Z level=info msg="Storage starting"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.multiorg.alertmanager t=2025-11-23T20:44:27.142668652Z level=info msg="Starting MultiOrg Alertmanager"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=http.server t=2025-11-23T20:44:27.144777976Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=http.server t=2025-11-23T20:44:27.145217947Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.state.manager t=2025-11-23T20:44:27.192850475Z level=info msg="State cache has been initialized" states=0 duration=50.521472ms
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ngalert.scheduler t=2025-11-23T20:44:27.192907046Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=ticker t=2025-11-23T20:44:27.192977398Z level=info msg=starting first_tick=2025-11-23T20:44:30Z
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=provisioning.dashboard t=2025-11-23T20:44:27.197791112Z level=info msg="starting to provision dashboards"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugins.update.checker t=2025-11-23T20:44:27.201637622Z level=info msg="Update check succeeded" duration=59.091864ms
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana.update.checker t=2025-11-23T20:44:27.206569308Z level=info msg="Update check succeeded" duration=64.155313ms
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=provisioning.dashboard t=2025-11-23T20:44:27.219209754Z level=info msg="finished to provision dashboards"
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:27] "GET /metrics HTTP/1.1" 200 48292 "" "Prometheus/2.51.0"
Nov 23 15:44:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:27] "GET /metrics HTTP/1.1" 200 48292 "" "Prometheus/2.51.0"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana-apiserver t=2025-11-23T20:44:27.798645978Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 23 15:44:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana-apiserver t=2025-11-23T20:44:27.799283875Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: Reconfiguring osd.0 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: Reconfiguring daemon osd.0 on compute-1
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 23 15:44:28 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:44:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:44:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 23 15:44:28 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 23 15:44:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:44:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v34: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.jtkauz (monmap changed)...
Nov 23 15:44:29 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.jtkauz (monmap changed)...
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:29 np0005532761 ceph-mgr[74869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:44:29 np0005532761 ceph-mgr[74869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:44:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 23 15:44:29 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:29 np0005532761 python3.9[105654]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
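[annotation] This lineinfile task asserts that cloud-init=disabled is present on the kernel command line; since /proc/cmdline is read-only, the task can only verify the line is already there (or fail), never insert it. The same presence check in plain Python:

    # Presence check equivalent to the lineinfile invocation above;
    # /proc/cmdline cannot be modified at runtime.
    with open('/proc/cmdline') as f:
        args = f.read().split()
    print('cloud-init=disabled' in args)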
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
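[annotation] In the dashboard sequence above, the mgr reads back the alertmanager and Grafana API endpoints and re-applies the Grafana URL, which is then persisted as the mgr/dashboard/GRAFANA_API_URL config option. The same round-trip from the admin CLI (commands copied from the audit entries), sketched with subprocess:

    import subprocess

    # Read, set, and re-read the dashboard's Grafana URL (admin keyring required).
    for args in (['ceph', 'dashboard', 'get-grafana-api-url'],
                 ['ceph', 'dashboard', 'set-grafana-api-url',
                  'https://192.168.122.100:3000'],
                 ['ceph', 'dashboard', 'get-grafana-api-url']):
        out = subprocess.run(args, capture_output=True, text=True, check=True)
        print(' '.join(args[1:]), '->', out.stdout.strip())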
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO root] Restarting engine...
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE Bus STOPPING
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE Bus STOPPING
Nov 23 15:44:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 23 15:44:30 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE Bus STOPPED
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE Bus STARTING
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE Bus STOPPED
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE Bus STARTING
Nov 23 15:44:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: Reconfiguring mgr.compute-2.jtkauz (monmap changed)...
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: Reconfiguring daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 15:44:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532761 python3.9[105820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE Serving on http://:::9283
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE Serving on http://:::9283
Nov 23 15:44:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: [23/Nov/2025:20:44:30] ENGINE Bus STARTED
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:30] ENGINE Bus STARTED
Nov 23 15:44:30 np0005532761 ceph-mgr[74869]: [prometheus INFO root] Engine started.
Nov 23 15:44:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:44:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
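[annotation] The recurring three-line radosgw groups (starting new request / req done / beast access line) are anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102 — consistent with load-balancer or monitoring health checks rather than user traffic. A sketch that parses the beast access-log format above and extracts client, status, and latency:

    import re

    # Field layout taken from the beast lines above:
    # beast: <req-ptr>: <client> - <user> [<ts>] "<request>" <status> ... latency=<sec>s
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous '
            '[23/Nov/2025:20:44:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    if m:
        print(m['client'], m['status'], float(m['lat']))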
Nov 23 15:44:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v35: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 15:44:31 np0005532761 podman[105967]: 2025-11-23 20:44:31.171629858 +0000 UTC m=+0.056081886 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:44:31 np0005532761 podman[105967]: 2025-11-23 20:44:31.268457514 +0000 UTC m=+0.152909542 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 23 15:44:31 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 23 15:44:31 np0005532761 podman[106087]: 2025-11-23 20:44:31.68191693 +0000 UTC m=+0.053757238 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:31 np0005532761 podman[106087]: 2025-11-23 20:44:31.710488756 +0000 UTC m=+0.082329053 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 15:44:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
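[annotation] The pool-set pairs above are the mgr stepping pgp_num_actual upward one placement group at a time (13 here, 14 a couple of seconds later) on cephfs.cephfs.meta and default.rgw.log; each step commits a new osdmap epoch (e94, e95, ...) and briefly remaps a couple of PGs — the "2 remapped+peering" seen in the earlier pgmap. To watch the split converge, dump per-pool pg/pgp counts; field names below are as emitted by recent Ceph releases, and the admin CLI is assumed:

    import json
    import subprocess

    # pg_num is the split target; pg_placement_num trails it while the mgr
    # steps pgp_num_actual toward it.
    raw = subprocess.run(['ceph', 'osd', 'pool', 'ls', 'detail', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for pool in json.loads(raw):
        print(pool['pool_name'], 'pg_num:', pool['pg_num'],
              'pgp_num:', pool['pg_placement_num'])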
Nov 23 15:44:32 np0005532761 podman[106352]: 2025-11-23 20:44:32.194747667 +0000 UTC m=+0.047635908 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:44:32 np0005532761 python3.9[106294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:32 np0005532761 podman[106352]: 2025-11-23 20:44:32.200978007 +0000 UTC m=+0.053866058 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:44:32 np0005532761 podman[106422]: 2025-11-23 20:44:32.393420098 +0000 UTC m=+0.053711076 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Nov 23 15:44:32 np0005532761 podman[106422]: 2025-11-23 20:44:32.406049623 +0000 UTC m=+0.066340591 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Nov 23 15:44:32 np0005532761 podman[106509]: 2025-11-23 20:44:32.591168055 +0000 UTC m=+0.042886547 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:32 np0005532761 podman[106509]: 2025-11-23 20:44:32.619193807 +0000 UTC m=+0.070912269 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Nov 23 15:44:32 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Nov 23 15:44:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 23 15:44:32 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:32 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 23 15:44:32 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 23 15:44:32 np0005532761 podman[106582]: 2025-11-23 20:44:32.804834503 +0000 UTC m=+0.047236789 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:32 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 1.
Nov 23 15:44:32 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:32 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.544s CPU time.
Nov 23 15:44:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:32 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:44:32 np0005532761 podman[106582]: 2025-11-23 20:44:32.995185189 +0000 UTC m=+0.237587455 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v38: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:44:33 np0005532761 podman[106819]: 2025-11-23 20:44:33.189154918 +0000 UTC m=+0.046066719 container create a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:44:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f1b079a02b35a3f6878250c82d5647ee5807e4f4195ff56a817f6f3a3ba75/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f1b079a02b35a3f6878250c82d5647ee5807e4f4195ff56a817f6f3a3ba75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f1b079a02b35a3f6878250c82d5647ee5807e4f4195ff56a817f6f3a3ba75/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f1b079a02b35a3f6878250c82d5647ee5807e4f4195ff56a817f6f3a3ba75/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:33 np0005532761 podman[106819]: 2025-11-23 20:44:33.245228323 +0000 UTC m=+0.102140144 container init a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:33 np0005532761 podman[106819]: 2025-11-23 20:44:33.25015379 +0000 UTC m=+0.107065591 container start a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:44:33 np0005532761 bash[106819]: a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be
Nov 23 15:44:33 np0005532761 podman[106819]: 2025-11-23 20:44:33.172328134 +0000 UTC m=+0.029239965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:44:33 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
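[annotation] Restart and grace: systemd restarted the nfs.cephfs unit (restart counter 1, a few lines up), and the replacement ganesha.nfsd 5.9 parsed its config, initialized the ID mapper, and entered a 90-second grace window during which clients may reclaim state. The unit's automatic-restart counter can be read back from systemd directly:

    import subprocess

    UNIT = ('ceph-03808be8-ae4a-5548-82e6-4a294f1bc627'
            '@nfs.cephfs.2.0.compute-0.bfglcy.service')

    # NRestarts is systemd's counter of automatic restarts for the unit.
    out = subprocess.run(['systemctl', 'show', '-p', 'NRestarts', UNIT],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. NRestarts=1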
Nov 23 15:44:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:44:33.367Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.005029007s
Nov 23 15:44:33 np0005532761 podman[106907]: 2025-11-23 20:44:33.371149568 +0000 UTC m=+0.054820664 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:33 np0005532761 podman[106907]: 2025-11-23 20:44:33.405357229 +0000 UTC m=+0.089028315 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:33 np0005532761 python3.9[106835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v39: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 23 15:44:33 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 23 15:44:33 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.008451523 +0000 UTC m=+0.044940419 container create 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 15:44:34 np0005532761 systemd[1]: Started libpod-conmon-2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec.scope.
Nov 23 15:44:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:33.988344195 +0000 UTC m=+0.024833101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.104177411 +0000 UTC m=+0.140666387 container init 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.111380576 +0000 UTC m=+0.147869492 container start 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.115206064 +0000 UTC m=+0.151694980 container attach 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:44:34 np0005532761 nervous_agnesi[107140]: 167 167
Nov 23 15:44:34 np0005532761 systemd[1]: libpod-2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec.scope: Deactivated successfully.
Nov 23 15:44:34 np0005532761 conmon[107140]: conmon 2f9c7b00c7c85aec9700 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec.scope/container/memory.events
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.118350396 +0000 UTC m=+0.154839292 container died 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e11a31ac61a2239cc7685e4110f689d1b02ae6c7e2d0996e8199207ae7cb30e9-merged.mount: Deactivated successfully.
Nov 23 15:44:34 np0005532761 podman[107085]: 2025-11-23 20:44:34.168283512 +0000 UTC m=+0.204772438 container remove 2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 15:44:34 np0005532761 systemd[1]: libpod-conmon-2f9c7b00c7c85aec9700cf3fdc32e9207e90ec59f3703ff8a4f448d01c1717ec.scope: Deactivated successfully.
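[annotation] The nervous_agnesi container above lives for roughly 50 ms: podman reports create, init, start, attach, died, and remove back to back, and its only output is "167 167" (the ceph uid/gid inside the image). This is cephadm running a one-shot probe container. A comparable round-trip, with stat standing in for the probe command — an assumption for illustration, not cephadm's exact invocation:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

    # One-shot container: --rm removes it as soon as the command exits,
    # mirroring the create/.../died/remove sequence in the podman events above.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: "167 167"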
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.343628452 +0000 UTC m=+0.042355664 container create 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:44:34 np0005532761 python3.9[107155]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:44:34 np0005532761 systemd[1]: Started libpod-conmon-081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75.scope.
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.325362181 +0000 UTC m=+0.024089423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.443178455 +0000 UTC m=+0.141905677 container init 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.450568794 +0000 UTC m=+0.149296006 container start 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.454968152 +0000 UTC m=+0.153695374 container attach 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 23 15:44:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 96 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=8 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.792305946s) [0] r=-1 lpr=96 pi=[73,96)/1 crt=50'991 mlcod 0'0 active pruub 253.554077148s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 96 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.791864395s) [0] r=-1 lpr=96 pi=[73,96)/1 crt=50'991 mlcod 0'0 active pruub 253.554046631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 96 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=8 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.792052269s) [0] r=-1 lpr=96 pi=[73,96)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 253.554077148s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 96 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.791745186s) [0] r=-1 lpr=96 pi=[73,96)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 253.554046631s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:34 np0005532761 nice_clarke[107196]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:44:34 np0005532761 nice_clarke[107196]: --> All data devices are unavailable
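[annotation] nice_clarke is a ceph-volume device scan: its two output lines report that the only candidate data device is an LVM volume already consumed, so there is nothing new to deploy on this host. The same availability data is exposed as JSON by ceph-volume's inventory subcommand:

    import json
    import subprocess

    # Each device entry carries 'available' plus the reasons it was rejected.
    raw = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for dev in json.loads(raw):
        print(dev['path'], 'available:', dev['available'],
              dev.get('rejected_reasons', []))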
Nov 23 15:44:34 np0005532761 systemd[1]: libpod-081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75.scope: Deactivated successfully.
Nov 23 15:44:34 np0005532761 conmon[107196]: conmon 081aba2ee11e175ae70b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75.scope/container/memory.events
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.805907748 +0000 UTC m=+0.504634970 container died 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
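[annotation] The CEPHADM_FAILED_DAEMON warning raised above (and echoed here from the cluster log) is the health check for the nfs daemon that was down between its stop and the restart; it clears once cephadm sees the daemon running again. Structured health output makes the active checks scriptable:

    import json
    import subprocess

    # 'checks' maps check names (e.g. CEPHADM_FAILED_DAEMON) to severity and summary.
    raw = subprocess.run(['ceph', 'health', 'detail', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for name, check in json.loads(raw).get('checks', {}).items():
        print(name, check['severity'], check['summary']['message'])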
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 23 15:44:34 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 97 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=8 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 97 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=8 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 97 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 97 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=73/74 n=5 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0ee1d93a033195938c866f2367e8c40ebe50b92b54c6ff2d8c889e30479e9e99-merged.mount: Deactivated successfully.
Nov 23 15:44:34 np0005532761 podman[107177]: 2025-11-23 20:44:34.857038675 +0000 UTC m=+0.555765887 container remove 081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_clarke, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:44:34 np0005532761 systemd[1]: libpod-conmon-081aba2ee11e175ae70bc5835f61c3606c4b3903283d7709177bfadd694fbe75.scope: Deactivated successfully.
Nov 23 15:44:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
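
[Editor's note] The three radosgw lines above are one load-balancer health probe: request start, request completion, then the beast access-log record. If you need these as structured data, here is a minimal sketch; the regex and field names are my own, matched against the exact line format shown here, not against any documented radosgw schema:

    import re

    # Matches the beast access-log line shown above; all group names are ad hoc.
    BEAST_RE = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    print(parse_beast('beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous '
                      '[23/Nov/2025:20:44:34.958 +0000] "HEAD / HTTP/1.0" 200 0 '
                      '- - - latency=0.000000000s'))

Every probe in this capture returns 200 with zero latency, so these triples can be treated as noise when troubleshooting.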
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.422654174 +0000 UTC m=+0.036000310 container create eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:44:35 np0005532761 systemd[1]: Started libpod-conmon-eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9.scope.
Nov 23 15:44:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v42: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.407345862 +0000 UTC m=+0.020692018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.510875351 +0000 UTC m=+0.124221487 container init eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.516962626 +0000 UTC m=+0.130308762 container start eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.521015034 +0000 UTC m=+0.134361200 container attach eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:44:35 np0005532761 silly_hypatia[107357]: 167 167
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.522098264 +0000 UTC m=+0.135444400 container died eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:35 np0005532761 systemd[1]: libpod-eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9.scope: Deactivated successfully.
Nov 23 15:44:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2266e69d805a04a500e3e847da6cd066f5cba9010841635458ce6e4791db3d30-merged.mount: Deactivated successfully.
Nov 23 15:44:35 np0005532761 podman[107337]: 2025-11-23 20:44:35.567224749 +0000 UTC m=+0.180570885 container remove eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:35 np0005532761 systemd[1]: libpod-conmon-eac631f74e0026ed0e713ea9342853bbea99fecb07a05c7853fe074d2d5b2fd9.scope: Deactivated successfully.
Nov 23 15:44:35 np0005532761 podman[107387]: 2025-11-23 20:44:35.708831545 +0000 UTC m=+0.040843952 container create f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 15:44:35 np0005532761 systemd[1]: Started libpod-conmon-f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020.scope.
Nov 23 15:44:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eadcd25a5153a63cf9bdd84911ae5351450e4acdd28d4813e268bfde3664c920/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:35 np0005532761 podman[107387]: 2025-11-23 20:44:35.691725164 +0000 UTC m=+0.023737601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eadcd25a5153a63cf9bdd84911ae5351450e4acdd28d4813e268bfde3664c920/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eadcd25a5153a63cf9bdd84911ae5351450e4acdd28d4813e268bfde3664c920/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eadcd25a5153a63cf9bdd84911ae5351450e4acdd28d4813e268bfde3664c920/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:35 np0005532761 podman[107387]: 2025-11-23 20:44:35.809208779 +0000 UTC m=+0.141221216 container init f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:44:35 np0005532761 podman[107387]: 2025-11-23 20:44:35.815455168 +0000 UTC m=+0.147467575 container start f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:35 np0005532761 podman[107387]: 2025-11-23 20:44:35.818827508 +0000 UTC m=+0.150839935 container attach f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 23 15:44:35 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 23 15:44:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 98 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=76/77 n=1 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98 pruub=10.322769165s) [0] r=-1 lpr=98 pi=[76,98)/1 crt=49'39 mlcod 49'39 active pruub 250.185165405s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 98 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=76/77 n=1 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98 pruub=10.322730064s) [0] r=-1 lpr=98 pi=[76,98)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 250.185165405s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 98 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=5 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] async=[0] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:35 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 98 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=8 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] async=[0] r=0 lpr=97 pi=[73,97)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
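
[Editor's note] Note the pattern in the ceph-mon entries: each `osd pool set ... pgp_num_actual` step (14, then 15, then 16 in this capture) is the mgr walking pgp_num toward pg_num one increment at a time, each step commits a new osdmap epoch (e97, e98, ...), and each new epoch triggers the ceph-osd start_peering_interval / "transitioning to Primary|Stray" bursts that follow it. To watch such a ramp converge, a small sketch; the pool field names are those I believe `ceph osd pool ls detail -f json` emits on recent releases, so verify them against your build:

    import json
    import subprocess

    def pool_pg_counts():
        # Ask the cluster for per-pool PG counts; needs a reachable cluster
        # and a keyring with mon read access.
        raw = subprocess.check_output(
            ['ceph', 'osd', 'pool', 'ls', 'detail', '--format', 'json'])
        return {p['pool_name']: (p['pg_num'], p['pg_placement_num'])
                for p in json.loads(raw)}

    for name, (pg_num, pgp_num) in pool_pg_counts().items():
        flag = 'ramping' if pgp_num < pg_num else 'converged'
        print(f'{name}: pg_num={pg_num} pgp_num={pgp_num} ({flag})')
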
Nov 23 15:44:36 np0005532761 elastic_allen[107408]: {
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:    "1": [
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:        {
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "devices": [
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "/dev/loop3"
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            ],
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "lv_name": "ceph_lv0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "lv_size": "21470642176",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "name": "ceph_lv0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "tags": {
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.cluster_name": "ceph",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.crush_device_class": "",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.encrypted": "0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.osd_id": "1",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.type": "block",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.vdo": "0",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:                "ceph.with_tpm": "0"
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            },
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "type": "block",
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:            "vg_name": "ceph_vg0"
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:        }
Nov 23 15:44:36 np0005532761 elastic_allen[107408]:    ]
Nov 23 15:44:36 np0005532761 elastic_allen[107408]: }
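
[Editor's note] The JSON block printed by the elastic_allen container has the shape of `ceph-volume lvm list` output: a map of OSD id to the logical volumes backing it, with the authoritative metadata carried in the LV tags (ceph.cluster_fsid, ceph.osd_fsid, ceph.osd_id, ceph.type). A minimal sketch for consuming it, assuming you have captured the JSON to a file (the file name and helper are mine):

    import json

    def osd_lvs(path):
        # Map OSD id -> (lv_path, tags) from ceph-volume lvm list JSON output.
        with open(path) as f:
            report = json.load(f)
        return {int(osd_id): (lv['lv_path'], lv['tags'])
                for osd_id, lvs in report.items()
                for lv in lvs}

    for osd_id, (lv_path, tags) in osd_lvs('lvm-list.json').items():
        print(osd_id, lv_path, tags['ceph.osd_fsid'], tags['ceph.type'])

Run against the block above, this confirms osd.1 is backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3 with osd_fsid 71c99843-04fc-447b-a9fd-4e17520a545c.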
Nov 23 15:44:36 np0005532761 systemd[1]: libpod-f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020.scope: Deactivated successfully.
Nov 23 15:44:36 np0005532761 podman[107387]: 2025-11-23 20:44:36.126693583 +0000 UTC m=+0.458706030 container died f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-eadcd25a5153a63cf9bdd84911ae5351450e4acdd28d4813e268bfde3664c920-merged.mount: Deactivated successfully.
Nov 23 15:44:36 np0005532761 podman[107387]: 2025-11-23 20:44:36.171966303 +0000 UTC m=+0.503978720 container remove f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_allen, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:36 np0005532761 systemd[1]: libpod-conmon-f8abf27bbf98bb953ad429622eaa43f14db639ac467f8e5c6914e3b8c7747020.scope: Deactivated successfully.
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.708701495 +0000 UTC m=+0.042053284 container create d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:36 np0005532761 systemd[1]: Started libpod-conmon-d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d.scope.
Nov 23 15:44:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:36.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.690979667 +0000 UTC m=+0.024331476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.78761579 +0000 UTC m=+0.120967599 container init d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.795257777 +0000 UTC m=+0.128609566 container start d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:44:36 np0005532761 systemd[1]: libpod-d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d.scope: Deactivated successfully.
Nov 23 15:44:36 np0005532761 condescending_shirley[107586]: 167 167
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.800880198 +0000 UTC m=+0.134232007 container attach d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.801095384 +0000 UTC m=+0.134447173 container died d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:44:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cacab05351ce041eb234df44ca12ddf53b9bb06724e52c627a8671ed634ba426-merged.mount: Deactivated successfully.
Nov 23 15:44:36 np0005532761 podman[107566]: 2025-11-23 20:44:36.842950562 +0000 UTC m=+0.176302351 container remove d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shirley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:44:36 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:36 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 23 15:44:36 np0005532761 systemd[1]: libpod-conmon-d008b222f7ddd40608e408d65a28d55a2bf63331ff1d58ea9503f0fd1f13494d.scope: Deactivated successfully.
Nov 23 15:44:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 23 15:44:36 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 23 15:44:36 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=14.981931686s) [0] async=[0] r=-1 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 50'991 active pruub 255.869827271s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:36 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=14.981756210s) [0] r=-1 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.869827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:36 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=14.981672287s) [0] async=[0] r=-1 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 50'991 active pruub 255.869842529s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:36 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=97/98 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=14.981595039s) [0] r=-1 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.869842529s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:36.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.000625239 +0000 UTC m=+0.049930235 container create 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:44:37 np0005532761 systemd[1]: Started libpod-conmon-1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3.scope.
Nov 23 15:44:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:44:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b90869a6e3b263d855c7c34e34aed36737d299bed13e1520d027c5bf93ce5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:36.976234853 +0000 UTC m=+0.025539909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b90869a6e3b263d855c7c34e34aed36737d299bed13e1520d027c5bf93ce5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b90869a6e3b263d855c7c34e34aed36737d299bed13e1520d027c5bf93ce5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:37 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b90869a6e3b263d855c7c34e34aed36737d299bed13e1520d027c5bf93ce5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.089938036 +0000 UTC m=+0.139243012 container init 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.095684051 +0000 UTC m=+0.144989017 container start 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.098721202 +0000 UTC m=+0.148026198 container attach 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:44:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v45: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532761 lvm[107706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:44:37 np0005532761 lvm[107706]: VG ceph_vg0 finished
Nov 23 15:44:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:37] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Nov 23 15:44:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:37] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Nov 23 15:44:37 np0005532761 crazy_wilbur[107627]: {}
Nov 23 15:44:37 np0005532761 systemd[1]: libpod-1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3.scope: Deactivated successfully.
Nov 23 15:44:37 np0005532761 systemd[1]: libpod-1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3.scope: Consumed 1.131s CPU time.
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.815213677 +0000 UTC m=+0.864518663 container died 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 23 15:44:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-31b90869a6e3b263d855c7c34e34aed36737d299bed13e1520d027c5bf93ce5e-merged.mount: Deactivated successfully.
Nov 23 15:44:37 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 23 15:44:37 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 100 pg[6.f( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100) [1] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:37 np0005532761 podman[107610]: 2025-11-23 20:44:37.979872934 +0000 UTC m=+1.029177900 container remove 1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 15:44:37 np0005532761 systemd[1]: libpod-conmon-1203035ad4f17eeb39b4444ecbc147c0fc69f51b0212a0171a2bab6cc2bfbcd3.scope: Deactivated successfully.
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:38.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 23 15:44:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:38.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:38 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 23 15:44:38 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 101 pg[6.f( v 49'39 lc 48'1 (0'0,49'39] local-lis/les=100/101 n=3 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100) [1] r=0 lpr=100 pi=[67,100)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:44:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:44:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v48: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 15:44:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 23 15:44:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 23 15:44:39 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 23 15:44:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 23 15:44:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 23 15:44:40 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 23 15:44:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:40.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 23 15:44:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 23 15:44:41 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 23 15:44:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v52: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 15:44:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204442 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:42.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 15:44:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:44.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.2: -2
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:44:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 23 15:44:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 23 15:44:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 23 15:44:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 23 15:44:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 23 15:44:46 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 23 15:44:46 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 23 15:44:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:46 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:44:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:44:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:47 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 23 15:44:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 23 15:44:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 23 15:44:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:47] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Nov 23 15:44:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:47] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Nov 23 15:44:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:47 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:44:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:44:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204448 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:48 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:48.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:49 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 23 15:44:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 23 15:44:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 23 15:44:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:49 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 109 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=65/66 n=4 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=109 pruub=12.904886246s) [2] r=-1 lpr=109 pi=[65,109)/1 crt=50'991 mlcod 0'0 active pruub 267.177307129s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:50 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 109 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=65/66 n=4 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=109 pruub=12.904848099s) [2] r=-1 lpr=109 pi=[65,109)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 267.177307129s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 110 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=65/66 n=4 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=110) [2]/[1] r=0 lpr=110 pi=[65,110)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:50 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 110 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=65/66 n=4 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=110) [2]/[1] r=0 lpr=110 pi=[65,110)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:50 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:51 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 23 15:44:51 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 23 15:44:51 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 111 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=110/111 n=4 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=110) [2]/[1] async=[2] r=0 lpr=110 pi=[65,110)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:51 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:52 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 23 15:44:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 23 15:44:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 23 15:44:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 23 15:44:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:52 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:52 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 23 15:44:52 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 112 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=112) [1] r=0 lpr=112 pi=[63,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:52 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 112 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=110/111 n=4 ec=57/44 lis/c=110/65 les/c/f=111/66/0 sis=112 pruub=15.005265236s) [2] async=[2] r=-1 lpr=112 pi=[65,112)/1 crt=50'991 mlcod 50'991 active pruub 271.680755615s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:52 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 112 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=110/111 n=4 ec=57/44 lis/c=110/65 les/c/f=111/66/0 sis=112 pruub=15.005118370s) [2] r=-1 lpr=112 pi=[65,112)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 271.680755615s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:52.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:44:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:44:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:53 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 23 15:44:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 113 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[63,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 113 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[63,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 23 15:44:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 23 15:44:53 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 113 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=113) [1] r=0 lpr=113 pi=[72,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:53 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:54 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 23 15:44:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 23 15:44:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 23 15:44:54 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 23 15:44:54 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 114 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[72,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:54 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 114 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[72,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:54.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:55 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 23 15:44:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 23 15:44:55 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 23 15:44:55 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 115 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=113/63 les/c/f=114/64/0 sis=115) [1] r=0 lpr=115 pi=[63,115)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:55 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 115 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=113/63 les/c/f=114/64/0 sis=115) [1] r=0 lpr=115 pi=[63,115)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 23 15:44:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:55 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 23 15:44:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 23 15:44:56 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 23 15:44:56 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 116 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=114/72 les/c/f=115/73/0 sis=116) [1] r=0 lpr=116 pi=[72,116)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:56 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 116 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=114/72 les/c/f=115/73/0 sis=116) [1] r=0 lpr=116 pi=[72,116)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:56 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 116 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=115/116 n=5 ec=57/44 lis/c=113/63 les/c/f=114/64/0 sis=115) [1] r=0 lpr=115 pi=[63,115)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:56 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:57 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:57 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:44944] [POST] [200] [0.118s] [4.0B] [3e17798e-8e62-4808-8415-057c99f7be45] /api/prometheus_receiver
Nov 23 15:44:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 23 15:44:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 23 15:44:57 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 23 15:44:57 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 117 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=116/117 n=5 ec=57/44 lis/c=114/72 les/c/f=115/73/0 sis=116) [1] r=0 lpr=116 pi=[72,116)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 23 15:44:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:57] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:44:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:44:57] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:44:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:57 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:58 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:44:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:59 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 638 B/s rd, 0 op/s; 22 B/s, 2 objects/s recovering
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 23 15:44:59 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 23 15:44:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:44:59 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:45:00 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 23 15:45:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:00 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:00.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:01 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 23 15:45:01 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 23 15:45:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:01 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 23 15:45:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:02 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:02.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:03 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:45:03
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'backups', 'images', '.nfs']
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:45:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 499 B/s rd, 0 op/s; 17 B/s, 1 objects/s recovering
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 23 15:45:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 23 15:45:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:03 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 23 15:45:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:04 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:05 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v80: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 23 15:45:05 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 23 15:45:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:05 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:06 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
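The recurring ganesha.nfsd "svc_vc_recv ... proxy header rest len failed ... (will set dead)" events repeat every second or two on fd 38. A plausible reading, given the haproxy-nfs-cephfs frontend logged further down, is TCP health probes that close before delivering a complete PROXY-protocol header, so ntirpc marks each transport dead and the next probe restarts the cycle; the literal "rlen = %" is how this build formats the message, not corruption in the capture. A sketch that collapses the flood into per-(thread, fd, transport) counts:

    import re, sys
    from collections import Counter

    EV = re.compile(r'ganesha\.nfsd-\d+\[(?P<thr>\w+)\].*svc_vc_recv: '
                    r'(?P<xprt>0x[0-9a-f]+) fd (?P<fd>\d+)')

    seen = Counter()
    for line in sys.stdin:
        m = EV.search(line)
        if m:
            seen[(m["thr"], m["fd"], m["xprt"])] += 1
    for key, n in seen.most_common():
        print(key, n)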
Nov 23 15:45:06 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 23 15:45:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:06.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:06.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:06.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:45:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:06.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
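Both ceph-dashboard webhook receivers are failing here: dials to compute-1 and compute-2 on 8443 time out, so the dispatcher exhausts its retry budget and drops the alert. A quick reachability probe for the two receivers (hosts and port taken from the error text above; this is plain sockets, not any Alertmanager API):

    import socket

    # Receivers from the dispatcher errors; port 8443 per the webhook URLs.
    targets = [("192.168.122.101", 8443), ("192.168.122.102", 8443)]
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(host, port, "reachable")
        except OSError as exc:   # covers both timeouts and refusals
            print(host, port, "unreachable:", exc)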
Nov 23 15:45:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:07 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 23 15:45:07 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 23 15:45:07 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 122 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=122) [1] r=0 lpr=122 pi=[88,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:45:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:45:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:07 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:08 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 23 15:45:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 23 15:45:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 23 15:45:08 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 23 15:45:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 123 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[88,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:08 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 123 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[88,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
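These osd.1 lines show the mechanics behind each pgp_num_actual step: the rehash remaps pg 10.19, so the up set stays [1] while the acting set flips to [0], osd.1 drops from role 0 to -1 and goes Stray, and after peering (epochs 125-126 below) it returns as primary and the PG settles back to active+clean. A sketch that pulls the acting-set transitions out of such lines:

    import re, sys

    PEER = re.compile(r'pg_epoch: (?P<e>\d+) pg\[(?P<pgid>\S+?)\(.*'
                      r'acting \[(?P<a0>[\d,]*)\] -> \[(?P<a1>[\d,]*)\]')

    for line in sys.stdin:
        m = PEER.search(line)
        if m:
            print("e%s pg %s: acting [%s] -> [%s]"
                  % (m["e"], m["pgid"], m["a0"], m["a1"]))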
Nov 23 15:45:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:09 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Nov 23 15:45:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 23 15:45:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 23 15:45:09 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 23 15:45:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:09 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=infra.usagestats t=2025-11-23T20:45:10.168358075Z level=info msg="Usage stats are ready to report"
Nov 23 15:45:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 23 15:45:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 23 15:45:10 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 23 15:45:10 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125) [1] r=0 lpr=125 pi=[88,125)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:10 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125) [1] r=0 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:10 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:10.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:11 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 23 15:45:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 23 15:45:11 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 23 15:45:11 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 126 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=125/126 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125) [1] r=0 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v89: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 806 B/s rd, 0 op/s
Nov 23 15:45:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:11 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:12 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:12.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:13 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 640 B/s rd, 0 op/s
Nov 23 15:45:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:13 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:14 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:14.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:15 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 23 15:45:15 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 23 15:45:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:15 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:16 np0005532761 python3.9[108062]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:45:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:16 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:16 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 23 15:45:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:16.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:16.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:45:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:17 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 23 15:45:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:45:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 23 15:45:17 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 23 15:45:17 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 128 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=128) [1] r=0 lpr=128 pi=[92,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:17 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
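Alongside the pool changes, the mgr periodically audits the OSD blocklist (the same dispatch appears again at 15:45:33 below). Since the command is in the audit log verbatim, it can be reproduced from the CLI; a sketch shelling out to it, assuming a ceph client and keyring are present on the node (the JSON shape, a list of addr/until entries, is the usual output but treat it as an assumption):

    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls",
                          "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    entries = json.loads(out)   # assumed: JSON array of {addr, until} entries
    print(len(entries), "blocklisted client(s)")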
Nov 23 15:45:18 np0005532761 python3.9[108380]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 15:45:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:18 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 23 15:45:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:18.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:18 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 23 15:45:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 129 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=-1 lpr=129 pi=[92,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:18 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 129 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=-1 lpr=129 pi=[92,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:19 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:19 np0005532761 python3.9[108534]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 15:45:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 23 15:45:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 23 15:45:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 23 15:45:19 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 23 15:45:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:19 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:20 np0005532761 python3.9[108686]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:45:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 23 15:45:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 23 15:45:20 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 23 15:45:20 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131) [1] r=0 lpr=131 pi=[92,131)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:20 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131) [1] r=0 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:20 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:20.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:21 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:21 np0005532761 python3.9[108840]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
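The three Ansible tasks above assemble a 1 GiB swap file: dd preallocates /swap (skipped when creates=/swap already exists), the file task clamps it to root:root 0600, and ansible.posix.mount with state=present writes "/swap none swap sw 0 0" into /etc/fstab without activating it (mkswap/swapon are presumably handled by later tasks). A rough, deliberately non-idempotent sketch of the same steps:

    import os, subprocess

    if not os.path.exists("/swap"):                  # mirrors creates=/swap
        subprocess.run(["dd", "if=/dev/zero", "of=/swap",
                        "count=1024", "bs=1M"], check=True)
    os.chown("/swap", 0, 0)                          # root:root
    os.chmod("/swap", 0o600)
    with open("/etc/fstab", "a") as fstab:           # state=present, minus
        fstab.write("/swap none swap sw 0 0\n")      # the idempotency check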
Nov 23 15:45:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 23 15:45:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 23 15:45:21 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 23 15:45:21 np0005532761 ceph-osd[83114]: osd.1 pg_epoch: 132 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=131/132 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131) [1] r=0 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 817 B/s rd, 0 op/s
Nov 23 15:45:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:21 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:22 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:22 np0005532761 python3.9[108993]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:22.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:23 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:23.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 0 op/s
Nov 23 15:45:23 np0005532761 python3.9[109146]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:23 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:24 np0005532761 python3.9[109224]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
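The stat/file pair above installs tls-ca-bundle.pem into the system anchors directory created two tasks earlier (with the cert_t context). On RHEL-family hosts the bundle only takes effect once update-ca-trust regenerates the extracted stores; that step is not visible in this window, so the sketch below is the standard follow-up rather than something this play is confirmed to run:

    import subprocess

    # Rebuild /etc/pki/ca-trust/extracted/* from the installed anchors.
    subprocess.run(["update-ca-trust", "extract"], check=True)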
Nov 23 15:45:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:24 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:24.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:25 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v102: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 23 15:45:25 np0005532761 python3.9[109378]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 23 15:45:25 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 23 15:45:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:25 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204526 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:45:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 23 15:45:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:26 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:26.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:26.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:45:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:26.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:27 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:27 np0005532761 python3.9[109536]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 15:45:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 23 15:45:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Nov 23 15:45:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 23 15:45:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:45:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:45:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 23 15:45:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 23 15:45:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:27 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:27 np0005532761 python3.9[109689]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 15:45:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 23 15:45:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 23 15:45:28 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 23 15:45:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:28 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:28.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 23 15:45:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:29 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:29 np0005532761 python3.9[109843]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:45:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 23 15:45:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Nov 23 15:45:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 23 15:45:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:29 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:29 np0005532761 python3.9[109998]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
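This run of tasks verifies the qemu user exists, creates the hugetlbfs group with gid 42477, and sets up /var/lib/vhost_sockets as qemu-owned 0755 with the virt_cache_t context, which is typical preparation for vhost-user sockets shared between QEMU and a userspace dataplane. The same assertions in stdlib Python:

    import grp, os, pwd, stat

    qemu = pwd.getpwnam("qemu")              # raises KeyError if absent
    huge = grp.getgrnam("hugetlbfs")
    assert huge.gr_gid == 42477
    st = os.stat("/var/lib/vhost_sockets")
    assert stat.S_IMODE(st.st_mode) == 0o755
    assert st.st_uid == qemu.pw_uid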
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:30 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:30.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:31 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 23 15:45:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:31 np0005532761 python3.9[110152]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 23 15:45:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 23 15:45:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:45:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:31 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 23 15:45:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:45:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 23 15:45:32 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 23 15:45:32 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:45:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:32 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:32.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0911178a90>)]
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0911178a00>)]
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 23 15:45:33 np0005532761 python3.9[110307]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 23 15:45:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.660979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733661008, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2891, "num_deletes": 252, "total_data_size": 7031990, "memory_usage": 7342096, "flush_reason": "Manual Compaction"}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733771010, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6620082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7917, "largest_seqno": 10807, "table_properties": {"data_size": 6606243, "index_size": 8861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34783, "raw_average_key_size": 22, "raw_value_size": 6576054, "raw_average_value_size": 4270, "num_data_blocks": 384, "num_entries": 1540, "num_filter_entries": 1540, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930629, "oldest_key_time": 1763930629, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 110078 microseconds, and 10409 cpu microseconds.
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.771052) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6620082 bytes OK
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.771071) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.777570) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.777607) EVENT_LOG_v1 {"time_micros": 1763930733777592, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.777627) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7018605, prev total WAL file size 7018605, number of live WAL files 2.
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.779175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6464KB)], [23(11MB)]
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733779213, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18327362, "oldest_snapshot_seqno": -1}
Nov 23 15:45:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:33 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4073 keys, 13907765 bytes, temperature: kUnknown
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733930734, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13907765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13875160, "index_size": 21295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104040, "raw_average_key_size": 25, "raw_value_size": 13795247, "raw_average_value_size": 3386, "num_data_blocks": 915, "num_entries": 4073, "num_filter_entries": 4073, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.931168) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13907765 bytes
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.973495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.8 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.3, 11.2 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4611, records dropped: 538 output_compression: NoCompression
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.973531) EVENT_LOG_v1 {"time_micros": 1763930733973518, "job": 8, "event": "compaction_finished", "compaction_time_micros": 151743, "compaction_time_cpu_micros": 26662, "output_level": 6, "num_output_files": 1, "total_output_size": 13907765, "num_input_records": 4611, "num_output_records": 4073, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733974973, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733977077, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.779105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.977203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.977209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.977212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.977214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:33 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:45:33.977216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532761 python3.9[110459]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:34 np0005532761 python3.9[110538]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 23 15:45:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 23 15:45:34 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 23 15:45:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:34 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:34 : epoch 69237231 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:45:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:35 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 23 15:45:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 23 15:45:35 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 23 15:45:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Nov 23 15:45:35 np0005532761 python3.9[110692]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:35 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : mgrmap e35: compute-0.oyehye(active, since 92s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 15:45:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:35 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:36 np0005532761 python3.9[110770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 23 15:45:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 23 15:45:36 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 23 15:45:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:36 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:36.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:36.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:45:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:37 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1010 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Nov 23 15:45:37 np0005532761 python3.9[110949]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:45:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:37] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Nov 23 15:45:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:37] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Nov 23 15:45:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:37 : epoch 69237231 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:45:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:37 : epoch 69237231 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:45:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:37 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:38 : epoch 69237231 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:45:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:38 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:38.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:38 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:40608] [POST] [200] [0.001s] [4.0B] [64798b44-b6ef-4520-989a-65487529fb0b] /api/prometheus_receiver
Nov 23 15:45:38 np0005532761 podman[111080]: 2025-11-23 20:45:38.964137146 +0000 UTC m=+0.066856141 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:45:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:39 np0005532761 podman[111080]: 2025-11-23 20:45:39.056944698 +0000 UTC m=+0.159663703 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:45:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 2.3 KiB/s wr, 7 op/s; 18 B/s, 1 objects/s recovering
Nov 23 15:45:39 np0005532761 podman[111221]: 2025-11-23 20:45:39.526122458 +0000 UTC m=+0.106951323 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:39 np0005532761 podman[111250]: 2025-11-23 20:45:39.590967648 +0000 UTC m=+0.049856795 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:39 np0005532761 podman[111221]: 2025-11-23 20:45:39.614733259 +0000 UTC m=+0.195562094 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:39 np0005532761 podman[111436]: 2025-11-23 20:45:39.907116294 +0000 UTC m=+0.043936297 container exec a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:45:39 np0005532761 podman[111436]: 2025-11-23 20:45:39.919119403 +0000 UTC m=+0.055939376 container exec_died a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:45:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:39 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:39 np0005532761 python3.9[111417]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:40 np0005532761 podman[111524]: 2025-11-23 20:45:40.178791829 +0000 UTC m=+0.126770817 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:45:40 np0005532761 podman[111546]: 2025-11-23 20:45:40.25295705 +0000 UTC m=+0.058874334 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:45:40 np0005532761 podman[111524]: 2025-11-23 20:45:40.296236376 +0000 UTC m=+0.244215364 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:45:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:40 np0005532761 podman[111645]: 2025-11-23 20:45:40.532585953 +0000 UTC m=+0.055941086 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 23 15:45:40 np0005532761 podman[111645]: 2025-11-23 20:45:40.546151759 +0000 UTC m=+0.069506862 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, architecture=x86_64, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 23 15:45:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:40 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:45:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:45:40 np0005532761 python3.9[111781]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 15:45:40 np0005532761 podman[111785]: 2025-11-23 20:45:40.901716185 +0000 UTC m=+0.193482634 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:40 np0005532761 podman[111785]: 2025-11-23 20:45:40.925134747 +0000 UTC m=+0.216901196 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:41 : epoch 69237231 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:45:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:41 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:41 np0005532761 podman[111883]: 2025-11-23 20:45:41.324828055 +0000 UTC m=+0.211854045 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:45:41 np0005532761 podman[111883]: 2025-11-23 20:45:41.479999691 +0000 UTC m=+0.367025661 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:45:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.4 KiB/s wr, 5 op/s; 15 B/s, 0 objects/s recovering
Nov 23 15:45:41 np0005532761 python3.9[112039]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:41 np0005532761 podman[112145]: 2025-11-23 20:45:41.857598408 +0000 UTC m=+0.055978077 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:41 np0005532761 podman[112145]: 2025-11-23 20:45:41.892137502 +0000 UTC m=+0.090517161 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:45:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:41 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:45:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.4 KiB/s wr, 4 op/s; 15 B/s, 0 objects/s recovering
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:45:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:42 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:45:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:45:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:43 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.296205444 +0000 UTC m=+0.089366506 container create a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:45:43 np0005532761 python3.9[112447]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.226995852 +0000 UTC m=+0.020156974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:43 np0005532761 systemd[1]: Started libpod-conmon-a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230.scope.
Nov 23 15:45:43 np0005532761 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 15:45:43 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.426904368 +0000 UTC m=+0.220065450 container init a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.434205766 +0000 UTC m=+0.227366828 container start a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.437262228 +0000 UTC m=+0.230423290 container attach a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:45:43 np0005532761 gifted_elion[112507]: 167 167
Nov 23 15:45:43 np0005532761 systemd[1]: libpod-a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230.scope: Deactivated successfully.
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.439126243 +0000 UTC m=+0.232287335 container died a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:45:43 np0005532761 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 15:45:43 np0005532761 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 15:45:43 np0005532761 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:45:43 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1cd222473966f8eba2ede9c71cd662375800c82be131e3cc2653a76cf6229ce6-merged.mount: Deactivated successfully.
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 23 15:45:43 np0005532761 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 15:45:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:43 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:43 np0005532761 podman[112487]: 2025-11-23 20:45:43.953960019 +0000 UTC m=+0.747121081 container remove a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:45:43 np0005532761 systemd[1]: libpod-conmon-a24201d958573d5c977b9c72980ec66854d1322f2f181bcc51fb0fac89b9f230.scope: Deactivated successfully.
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.081032025 +0000 UTC m=+0.024475215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:44 np0005532761 python3.9[112704]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.500782163 +0000 UTC m=+0.444225333 container create 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:45:44 np0005532761 systemd[1]: Started libpod-conmon-988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f.scope.
Nov 23 15:45:44 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.660591688 +0000 UTC m=+0.604034878 container init 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.668078592 +0000 UTC m=+0.611521762 container start 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.672487214 +0000 UTC m=+0.615930404 container attach 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:45:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:44 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:44.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:44 np0005532761 adoring_shaw[112733]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:45:44 np0005532761 adoring_shaw[112733]: --> All data devices are unavailable
Nov 23 15:45:44 np0005532761 ceph-mon[74569]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Nov 23 15:45:44 np0005532761 ceph-mon[74569]: Cluster is now healthy
Nov 23 15:45:44 np0005532761 systemd[1]: libpod-988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f.scope: Deactivated successfully.
Nov 23 15:45:44 np0005532761 podman[112588]: 2025-11-23 20:45:44.99761978 +0000 UTC m=+0.941062960 container died 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:45:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d472ac5f28fb5caee40b8ee285f091bd1c01e9284759826089c1cbe7ace97dd3-merged.mount: Deactivated successfully.
Nov 23 15:45:45 np0005532761 podman[112588]: 2025-11-23 20:45:45.044956947 +0000 UTC m=+0.988400117 container remove 988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:45:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:45 np0005532761 systemd[1]: libpod-conmon-988f25d8f38a10a7924430fe3136faa3c92a4da344bc58a1a136f3e2b42e0f6f.scope: Deactivated successfully.
Nov 23 15:45:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:45:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:45:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.539544207 +0000 UTC m=+0.030677419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.678791297 +0000 UTC m=+0.169924489 container create cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:45:45 np0005532761 systemd[1]: Started libpod-conmon-cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300.scope.
Nov 23 15:45:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.796240013 +0000 UTC m=+0.287373235 container init cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.802253083 +0000 UTC m=+0.293386275 container start cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:45:45 np0005532761 wizardly_austin[112869]: 167 167
Nov 23 15:45:45 np0005532761 systemd[1]: libpod-cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300.scope: Deactivated successfully.
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.871138826 +0000 UTC m=+0.362272038 container attach cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:45:45 np0005532761 podman[112852]: 2025-11-23 20:45:45.871724554 +0000 UTC m=+0.362857746 container died cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:45:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:45 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8dc32f3fe672ce99f37c9fb3a027a16cde6f66ee13867b9be345ac1e061b2746-merged.mount: Deactivated successfully.
Nov 23 15:45:46 np0005532761 podman[112852]: 2025-11-23 20:45:46.124252695 +0000 UTC m=+0.615385927 container remove cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:45:46 np0005532761 systemd[1]: libpod-conmon-cc6832b9dfb731b7b859c7a7e1a3dd4538ac91c2faa30f6ab6cbbfafa1438300.scope: Deactivated successfully.
Nov 23 15:45:46 np0005532761 podman[112895]: 2025-11-23 20:45:46.252950749 +0000 UTC m=+0.021814765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:46 np0005532761 podman[112895]: 2025-11-23 20:45:46.458209235 +0000 UTC m=+0.227073271 container create a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:45:46 np0005532761 systemd[1]: Started libpod-conmon-a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501.scope.
Nov 23 15:45:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d0a09eded1ae6449d7f022c0cc9691441ba85695f090917b144641d3240ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d0a09eded1ae6449d7f022c0cc9691441ba85695f090917b144641d3240ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d0a09eded1ae6449d7f022c0cc9691441ba85695f090917b144641d3240ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d0a09eded1ae6449d7f022c0cc9691441ba85695f090917b144641d3240ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 10 B/s, 0 objects/s recovering
Nov 23 15:45:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:46 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:46 np0005532761 podman[112895]: 2025-11-23 20:45:46.732042694 +0000 UTC m=+0.500906790 container init a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:45:46 np0005532761 podman[112895]: 2025-11-23 20:45:46.745860398 +0000 UTC m=+0.514724404 container start a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:45:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000030s ======
Nov 23 15:45:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:46.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 23 15:45:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:46.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:45:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:46.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:46.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:46 np0005532761 podman[112895]: 2025-11-23 20:45:46.990567325 +0000 UTC m=+0.759431351 container attach a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:45:47 np0005532761 loving_hellman[112912]: {
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:    "1": [
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:        {
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "devices": [
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "/dev/loop3"
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            ],
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "lv_name": "ceph_lv0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "lv_size": "21470642176",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "name": "ceph_lv0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "tags": {
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.cluster_name": "ceph",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.crush_device_class": "",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.encrypted": "0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.osd_id": "1",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.type": "block",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.vdo": "0",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:                "ceph.with_tpm": "0"
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            },
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "type": "block",
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:            "vg_name": "ceph_vg0"
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:        }
Nov 23 15:45:47 np0005532761 loving_hellman[112912]:    ]
Nov 23 15:45:47 np0005532761 loving_hellman[112912]: }
Nov 23 15:45:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:47 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:47 np0005532761 systemd[1]: libpod-a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501.scope: Deactivated successfully.
Nov 23 15:45:47 np0005532761 podman[112895]: 2025-11-23 20:45:47.090534078 +0000 UTC m=+0.859398084 container died a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:45:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-55d0a09eded1ae6449d7f022c0cc9691441ba85695f090917b144641d3240ae7-merged.mount: Deactivated successfully.
Nov 23 15:45:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:47] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Nov 23 15:45:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:47] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Nov 23 15:45:47 np0005532761 podman[112895]: 2025-11-23 20:45:47.893289666 +0000 UTC m=+1.662153682 container remove a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hellman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:45:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:47 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:47 np0005532761 systemd[1]: libpod-conmon-a951c6a0c0034247bef9c50fe87343f9e2f06328260982ab443b15c0ccaf8501.scope: Deactivated successfully.
Nov 23 15:45:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204548 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:45:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 23 15:45:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:45:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.410275416 +0000 UTC m=+0.019760233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:48 np0005532761 python3.9[113139]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.637628353 +0000 UTC m=+0.247113150 container create a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:45:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1012 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Nov 23 15:45:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:48 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:48 np0005532761 systemd[1]: Started libpod-conmon-a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99.scope.
Nov 23 15:45:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:48 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.878794044 +0000 UTC m=+0.488278861 container init a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.886579288 +0000 UTC m=+0.496064085 container start a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:45:48 np0005532761 brave_snyder[113242]: 167 167
Nov 23 15:45:48 np0005532761 systemd[1]: libpod-a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99.scope: Deactivated successfully.
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.903644609 +0000 UTC m=+0.513129406 container attach a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 15:45:48 np0005532761 podman[113155]: 2025-11-23 20:45:48.904596618 +0000 UTC m=+0.514081425 container died a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:45:48 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4b5717ae06a8b60eb11a4acf49e07b0c51ff5063d6154c690d6a353f9ea30831-merged.mount: Deactivated successfully.
Nov 23 15:45:49 np0005532761 podman[113155]: 2025-11-23 20:45:49.046128935 +0000 UTC m=+0.655613732 container remove a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_snyder, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:45:49 np0005532761 systemd[1]: libpod-conmon-a3e228939a6321bfb3f70c6a656d817c7829e1d7b86f5313b37e6466432e6d99.scope: Deactivated successfully.
Nov 23 15:45:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:49 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:49 np0005532761 podman[113353]: 2025-11-23 20:45:49.171635624 +0000 UTC m=+0.024848055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:45:49 np0005532761 podman[113353]: 2025-11-23 20:45:49.247279839 +0000 UTC m=+0.100492190 container create f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:45:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:49 np0005532761 python3.9[113345]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:49 np0005532761 systemd[1]: Started libpod-conmon-f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923.scope.
Nov 23 15:45:49 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:45:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f182482d972f10060ba044144159ae994871022d8a8259d151d37cad046d5e46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f182482d972f10060ba044144159ae994871022d8a8259d151d37cad046d5e46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f182482d972f10060ba044144159ae994871022d8a8259d151d37cad046d5e46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f182482d972f10060ba044144159ae994871022d8a8259d151d37cad046d5e46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:45:49 np0005532761 podman[113353]: 2025-11-23 20:45:49.427469534 +0000 UTC m=+0.280681965 container init f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:45:49 np0005532761 podman[113353]: 2025-11-23 20:45:49.436690451 +0000 UTC m=+0.289902832 container start f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:45:49 np0005532761 podman[113353]: 2025-11-23 20:45:49.532420857 +0000 UTC m=+0.385633228 container attach f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:45:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:49 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:50 np0005532761 lvm[113469]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:45:50 np0005532761 lvm[113469]: VG ceph_vg0 finished
Nov 23 15:45:50 np0005532761 gallant_einstein[113370]: {}
Nov 23 15:45:50 np0005532761 systemd[1]: libpod-f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923.scope: Deactivated successfully.
Nov 23 15:45:50 np0005532761 podman[113353]: 2025-11-23 20:45:50.264653012 +0000 UTC m=+1.117865363 container died f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:45:50 np0005532761 systemd[1]: libpod-f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923.scope: Consumed 1.075s CPU time.
Nov 23 15:45:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 92 B/s wr, 0 op/s
Nov 23 15:45:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:50 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000030s ======
Nov 23 15:45:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:50.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 23 15:45:50 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f182482d972f10060ba044144159ae994871022d8a8259d151d37cad046d5e46-merged.mount: Deactivated successfully.
Nov 23 15:45:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:51 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:51 np0005532761 systemd[1]: session-39.scope: Deactivated successfully.
Nov 23 15:45:51 np0005532761 systemd[1]: session-39.scope: Consumed 1min 1.353s CPU time.
Nov 23 15:45:51 np0005532761 systemd-logind[820]: Session 39 logged out. Waiting for processes to exit.
Nov 23 15:45:51 np0005532761 systemd-logind[820]: Removed session 39.
Nov 23 15:45:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:51 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:51 np0005532761 podman[113353]: 2025-11-23 20:45:51.990992195 +0000 UTC m=+2.844204556 container remove f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_einstein, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:45:52 np0005532761 systemd[1]: libpod-conmon-f5c9dba832463ca221473237313e2680ea7f6ebfffadd061dea3de8e0c797923.scope: Deactivated successfully.
Nov 23 15:45:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:45:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:45:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 92 B/s wr, 0 op/s
Nov 23 15:45:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:52 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:53 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:53 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:45:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:54 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:55 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:55 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:56 np0005532761 systemd-logind[820]: New session 40 of user zuul.
Nov 23 15:45:56 np0005532761 systemd[1]: Started Session 40 of User zuul.
Nov 23 15:45:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:45:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:56 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:45:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:45:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:56.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:56.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:45:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:57 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:57 np0005532761 python3.9[113695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:45:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:57] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:45:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:45:57] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:45:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:57 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:45:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:58 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:58 np0005532761 python3.9[113852]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 15:45:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:45:58.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:45:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:45:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:58.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:45:58 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:38260] [POST] [200] [0.002s] [4.0B] [ac95de46-3385-403a-936f-61e74d1621e2] /api/prometheus_receiver
Nov 23 15:45:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:59 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:45:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:45:59 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:59 np0005532761 python3.9[114006]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:46:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:00 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:00 np0005532761 python3.9[114091]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:46:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:01 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:01 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:02 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:46:03
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:46:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:03 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', '.nfs', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:46:03 np0005532761 python3.9[114248]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
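[Annotation] The pg_autoscaler figures above are reproducible: each "pg target" is usage-of-space x bias x a root-level PG budget, which works out to a multiplier of 300 here, consistent with the default mon_target_pg_per_osd of 100 and three OSDs behind the 60 GiB cluster (an inference from the capacity, not logged directly). The "quantized to" value is simply the current pg_num left in place, since the autoscaler only proposes a change once the ideal count drifts past its threshold (3x by default). A sketch of the arithmetic under those assumptions:

    # Reproduces the pg_autoscaler arithmetic logged above, assuming
    # mon_target_pg_per_osd=100 (default) and 3 OSDs (inferred, not logged).
    MON_TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr: ~0.0021557, as logged
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta: ~0.00061047
    print(pg_target(6.359070782053786e-08, 1.0))  # .nfs: ~1.9077e-05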
Nov 23 15:46:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:46:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
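[Annotation] This handle_command/audit pair recurs every ~15 s: the mgr polls the mon for the OSD blocklist. The same mon command can be issued from Python through the rados binding; a sketch assuming python3-rados is installed and a readable ceph.conf plus keyring exist on the host:

    import json
    import rados

    # Connects with defaults from ceph.conf; the command JSON matches the
    # audit line above verbatim.
    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret == 0:
            print(json.loads(outbuf or b'[]'))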
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:46:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:46:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:03 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:46:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:04 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398003ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:04.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:05 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:05 np0005532761 python3.9[114404]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:46:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:05 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:06 np0005532761 python3.9[114557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:06 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:06.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:46:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:07 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398004010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:07 np0005532761 python3.9[114713]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 15:46:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:07] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:46:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:07] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:46:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:07 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:08 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:08 np0005532761 python3.9[114864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:09 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:09 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:10 np0005532761 python3.9[115023]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:46:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:10 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000030s ======
Nov 23 15:46:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 23 15:46:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:11 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:11.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:11 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:12 np0005532761 python3.9[115179]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
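[Annotation] This step verifies the packages installed by the dnf task a moment earlier: rpm -V prints one line per discrepancy and exits non-zero when any listed package is missing or has altered files, so empty output with a zero status is the pass condition. A rough equivalent of what the task invokes (package list shortened):

    import subprocess

    # Mirrors the verification command above; non-zero return means at least
    # one package is missing or fails verification.
    pkgs = ["driverctl", "lvm2", "crudini", "jq", "nftables"]
    result = subprocess.run(["rpm", "-V", *pkgs], capture_output=True, text=True)
    if result.returncode != 0:
        print("verification reported differences:")
        print(result.stdout or result.stderr)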
Nov 23 15:46:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:12 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:13 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:13 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:46:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:14 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:14 np0005532761 python3.9[115468]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:46:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:14.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:15 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:15 np0005532761 python3.9[115619]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:15 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2398004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:16 np0005532761 python3.9[115773]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:16 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:46:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:16.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:46:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:16.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:46:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:16.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:46:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:17 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:17.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:17] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:46:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:17] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:46:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:17 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:46:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:46:18 np0005532761 python3.9[115955]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:18 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:18.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:19 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:19.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:19 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:46:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:20 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:20 np0005532761 python3.9[116111]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000030s ======
Nov 23 15:46:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:20.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 23 15:46:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:21 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:46:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:46:21 np0005532761 python3.9[116266]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
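[Annotation] The stat/slurp pair reads back the exit status that a previous os-net-config run recorded, presumably to decide whether network configuration needs to be reapplied. ansible.builtin.slurp returns the file body base64-encoded, so the consumer must decode it; a sketch with an assumed file content of "0" (success):

    import base64

    # ansible.builtin.slurp returns {"content": <base64>, "encoding": "base64"};
    # the content shown here ("0") is an assumed example, not taken from the log.
    slurp_result = {"content": "MA==", "encoding": "base64"}
    returncode = int(base64.b64decode(slurp_result["content"]).decode().strip())
    print("last os-net-config run exited with", returncode)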
Nov 23 15:46:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:21 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:22 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:22.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:23 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0041a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:23.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:23 np0005532761 systemd[1]: session-40.scope: Deactivated successfully.
Nov 23 15:46:23 np0005532761 systemd[1]: session-40.scope: Consumed 17.581s CPU time.
Nov 23 15:46:23 np0005532761 systemd-logind[820]: Session 40 logged out. Waiting for processes to exit.
Nov 23 15:46:23 np0005532761 systemd-logind[820]: Removed session 40.
Nov 23 15:46:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:23 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:46:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:24 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:25 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:25 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:26 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:46:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:46:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:26.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:46:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:27 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:27.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:27] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:27] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:27 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2390003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:28 np0005532761 systemd-logind[820]: New session 41 of user zuul.
Nov 23 15:46:28 np0005532761 systemd[1]: Started Session 41 of User zuul.
Nov 23 15:46:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:28 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c0041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:29 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:29.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:29 np0005532761 python3.9[116452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:29 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:46:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:30 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:30 np0005532761 python3.9[116607]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:31 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f239c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 23 15:46:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 23 15:46:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:31 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:32 np0005532761 python3.9[116803]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[106859]: 23/11/2025 20:46:32 : epoch 69237231 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80091b0 fd 38 proxy ignored for local
Nov 23 15:46:32 np0005532761 kernel: ganesha.nfsd[113171]: segfault at 50 ip 00007f24732fd32e sp 00007f242b7fd210 error 4 in libntirpc.so.5.8[7f24732e2000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:46:32 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
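[Annotation] The faulting instruction is recoverable from the Code: line: the byte the kernel marks with <..> decodes to mov r12d, dword ptr [r13 + 0x50], which together with "segfault at 50" (error 4: a user-mode read of an unmapped address) points at a NULL structure pointer in r13 dereferenced at offset 0x50 inside libntirpc. A disassembly sketch using capstone (assumed installed; bytes copied from the line above, minus the truncated trailing call):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # Bytes from the kernel "Code:" line, starting at the faulting
    # instruction (the byte marked with <..>).
    code = bytes.fromhex("458b6550498b7568418bbe28020000b940000000")
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x7f24732fd32e):
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")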
Nov 23 15:46:32 np0005532761 systemd[1]: Started Process Core Dump (PID 116830/UID 0).
Nov 23 15:46:32 np0005532761 systemd[1]: session-41.scope: Deactivated successfully.
Nov 23 15:46:32 np0005532761 systemd[1]: session-41.scope: Consumed 2.135s CPU time.
Nov 23 15:46:32 np0005532761 systemd-logind[820]: Session 41 logged out. Waiting for processes to exit.
Nov 23 15:46:32 np0005532761 systemd-logind[820]: Removed session 41.
Nov 23 15:46:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:32.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:46:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:46:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:33.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:46:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:46:33 np0005532761 systemd-coredump[116831]: Process 106866 (ganesha.nfsd) of user 0 dumped core.
Nov 23 15:46:33 np0005532761 systemd-coredump[116831]:   Stack trace of thread 60:
Nov 23 15:46:33 np0005532761 systemd-coredump[116831]:   #0  0x00007f24732fd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Nov 23 15:46:33 np0005532761 systemd-coredump[116831]:   ELF object binary architecture: AMD x86-64
Nov 23 15:46:33 np0005532761 systemd[1]: systemd-coredump@1-116830-0.service: Deactivated successfully.
Nov 23 15:46:33 np0005532761 systemd[1]: systemd-coredump@1-116830-0.service: Consumed 1.065s CPU time.
Nov 23 15:46:33 np0005532761 podman[116837]: 2025-11-23 20:46:33.947966517 +0000 UTC m=+0.044653240 container died a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:46:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fa5f1b079a02b35a3f6878250c82d5647ee5807e4f4195ff56a817f6f3a3ba75-merged.mount: Deactivated successfully.
Nov 23 15:46:34 np0005532761 podman[116837]: 2025-11-23 20:46:34.002345665 +0000 UTC m=+0.099032358 container remove a351101f97a91301f60431273089d1cb144be70a9ff3a2486218b44ecd73f0be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:46:34 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:46:34 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:46:34 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.508s CPU time.
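systemd-coredump reports ganesha.nfsd (pid 106866) dumping core with the faulting frame inside libntirpc.so.5.8, and the unit then fails with status=139. 139 is 128 + 11, i.e. the process was killed by SIGSEGV, which matches the core dump. A one-line sketch of that decoding:

    import signal

    status = 139                           # "Main process exited ... status=139"
    if status > 128:
        sig = signal.Signals(status - 128)
        print(f"terminated by {sig.name}")  # -> terminated by SIGSEGV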
Nov 23 15:46:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:46:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:34.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:35.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:36.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
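Alertmanager keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 with "context deadline exceeded", i.e. its HTTP POSTs time out. A minimal reachability probe for one of those receivers using only the standard library; the URL is copied from the log line, while the empty JSON body and the 5-second timeout are arbitrary assumptions:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}",
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:                  # URLError/timeouts both land here
        print("receiver unreachable:", exc)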
Nov 23 15:46:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:37.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:37 np0005532761 systemd-logind[820]: New session 42 of user zuul.
Nov 23 15:46:37 np0005532761 systemd[1]: Started Session 42 of User zuul.
Nov 23 15:46:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:37] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:37] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:38 np0005532761 python3.9[117065]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204638 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
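haproxy marks backend nfs.cephfs.2 DOWN on a Layer4 check ("Connection refused"), consistent with the ganesha daemon above having just segfaulted: nothing is listening on its NFS port yet. A Layer4 check is simply a TCP connect; a sketch of the same probe, where host and port are hypothetical placeholders since the actual backend address is not shown in these lines:

    import socket

    # Hypothetical backend address; the real one is not in this log excerpt.
    host, port = "192.168.122.100", 2049

    try:
        with socket.create_connection((host, port), timeout=2):
            print("L4 check: UP")
    except OSError as exc:
        print("L4 check: DOWN,", exc)       # e.g. ConnectionRefusedError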
Nov 23 15:46:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:39.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:39 np0005532761 python3.9[117221]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:46:40 np0005532761 python3.9[117378]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:40.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:41.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:41 np0005532761 python3.9[117463]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:46:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:43.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:43 np0005532761 python3.9[117622]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:44 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 2.
Nov 23 15:46:44 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:46:44 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.508s CPU time.
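After the crash, systemd schedules a restart of the ganesha unit ("restart counter is at 2"), stops the failed instance, and starts a fresh one, which also explains the haproxy backend flap above. A sketch for reading that counter and the last result back out of systemd; it assumes `systemctl` is on PATH and reuses the unit name from the lines above:

    import subprocess

    unit = ("ceph-03808be8-ae4a-5548-82e6-4a294f1bc627"
            "@nfs.cephfs.2.0.compute-0.bfglcy.service")
    out = subprocess.run(
        ["systemctl", "show", unit, "-p", "NRestarts", "-p", "Result"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)    # e.g. NRestarts=2 / Result=exit-code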
Nov 23 15:46:44 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:46:44 np0005532761 podman[117742]: 2025-11-23 20:46:44.449267421 +0000 UTC m=+0.045764709 container create afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:46:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686e2ed13e4739f4c51c24a9184656442faf40805bd271380bc7bb2349d7003/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686e2ed13e4739f4c51c24a9184656442faf40805bd271380bc7bb2349d7003/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686e2ed13e4739f4c51c24a9184656442faf40805bd271380bc7bb2349d7003/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686e2ed13e4739f4c51c24a9184656442faf40805bd271380bc7bb2349d7003/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:44 np0005532761 podman[117742]: 2025-11-23 20:46:44.512050116 +0000 UTC m=+0.108547314 container init afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:46:44 np0005532761 podman[117742]: 2025-11-23 20:46:44.516577666 +0000 UTC m=+0.113074844 container start afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:46:44 np0005532761 bash[117742]: afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073
Nov 23 15:46:44 np0005532761 podman[117742]: 2025-11-23 20:46:44.42368375 +0000 UTC m=+0.020180958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:46:44 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:46:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
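On startup ganesha enters a 90-second grace window ("NFS Server Now IN GRACE, duration 90") during which existing clients may reclaim state before new opens are served; as the 15:46:50 lines further below show, with no clients to reclaim (clid count 0) the server can lift grace early. A trivial sketch computing when the full window would close, parsing the timestamp format ganesha uses in the line above:

    from datetime import datetime, timedelta

    # Timestamp format taken from the ganesha log line above.
    start = datetime.strptime("23/11/2025 20:46:44", "%d/%m/%Y %H:%M:%S")
    print("grace ends at", start + timedelta(seconds=90))   # -> 20:48:14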
Nov 23 15:46:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:46:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:45 np0005532761 python3.9[117928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:46:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:46 np0005532761 python3.9[118080]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
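The Ansible run above shells out to `podman network inspect podman`. That command prints a JSON array with one object per network; a sketch doing the same and pulling out the driver and subnets, assuming podman 4.x-style netavark output:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        check=True, capture_output=True, text=True,
    ).stdout
    net = json.loads(out)[0]
    print(net.get("driver"),
          [s.get("subnet") for s in net.get("subnets", [])])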
Nov 23 15:46:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:46:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:46.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:46:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:47.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:47 np0005532761 python3.9[118247]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:46:47 np0005532761 python3.9[118325]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:46:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:47] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:47] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 23 15:46:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:46:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:46:48 np0005532761 python3.9[118478]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:46:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:46:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:48.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:49 np0005532761 python3.9[118556]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:50 np0005532761 python3.9[118710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:46:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:46:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 15:46:50 np0005532761 python3.9[118864]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:51.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:51 np0005532761 python3.9[119017]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:52 np0005532761 python3.9[119169]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
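The four ini_file tasks above pin container-engine settings in /etc/containers/containers.conf: pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network]. A sketch reproducing the same edit with the standard-library configparser, writing to a scratch path rather than the live file (containers.conf is TOML, but these simple key = value lines are representable either way):

    import configparser

    path = "/tmp/containers.conf"        # scratch copy, not the live file
    cfg = configparser.ConfigParser()
    cfg.read(path)

    settings = {
        "containers": {"pids_limit": "4096"},
        "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
        "network": {"network_backend": '"netavark"'},
    }
    for section, options in settings.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in options.items():
            cfg.set(section, key, value)

    with open(path, "w") as fh:
        cfg.write(fh)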
Nov 23 15:46:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 15:46:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:53 np0005532761 python3.9[119380]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:46:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:46:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:46:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.431366654 +0000 UTC m=+0.040157790 container create 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:46:54 np0005532761 systemd[1]: Started libpod-conmon-19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c.scope.
Nov 23 15:46:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.501736039 +0000 UTC m=+0.110527195 container init 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.412545477 +0000 UTC m=+0.021336633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.50859199 +0000 UTC m=+0.117383126 container start 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 15:46:54 np0005532761 exciting_leavitt[119511]: 167 167
Nov 23 15:46:54 np0005532761 systemd[1]: libpod-19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c.scope: Deactivated successfully.
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.514033723 +0000 UTC m=+0.122824879 container attach 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.51428337 +0000 UTC m=+0.123074506 container died 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:46:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c4e8792d6e13c673ff630daa5db4ff18ea2b3f9d6eacdddcd6951e52572b2ca0-merged.mount: Deactivated successfully.
Nov 23 15:46:54 np0005532761 podman[119494]: 2025-11-23 20:46:54.552986651 +0000 UTC m=+0.161777787 container remove 19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:46:54 np0005532761 systemd[1]: libpod-conmon-19bac717b9441f7c6b2b622d61811f088268e5a261c3808805ea03e56d976a9c.scope: Deactivated successfully.
Nov 23 15:46:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:46:54 np0005532761 podman[119534]: 2025-11-23 20:46:54.687537838 +0000 UTC m=+0.042552294 container create c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:46:54 np0005532761 systemd[1]: Started libpod-conmon-c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1.scope.
Nov 23 15:46:54 np0005532761 podman[119534]: 2025-11-23 20:46:54.670164609 +0000 UTC m=+0.025179085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:54 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:54 np0005532761 podman[119534]: 2025-11-23 20:46:54.790197524 +0000 UTC m=+0.145211980 container init c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:46:54 np0005532761 podman[119534]: 2025-11-23 20:46:54.800369682 +0000 UTC m=+0.155384148 container start c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:46:54 np0005532761 podman[119534]: 2025-11-23 20:46:54.804301816 +0000 UTC m=+0.159316332 container attach c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:46:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:54.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:55 np0005532761 eloquent_goldstine[119550]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:46:55 np0005532761 eloquent_goldstine[119550]: --> All data devices are unavailable
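The short-lived ceph containers in this stretch (exciting_leavitt, eloquent_goldstine, priceless_dhawan) are cephadm probes; eloquent_goldstine's output ("passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable") reads like a ceph-volume device scan concluding there is nothing new to turn into an OSD. A sketch, assuming a node with the ceph-volume CLI installed, of asking it directly which devices it considers available:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        state = "available" if dev.get("available") else "unavailable"
        print(dev.get("path"), state, dev.get("rejected_reasons"))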
Nov 23 15:46:55 np0005532761 systemd[1]: libpod-c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1.scope: Deactivated successfully.
Nov 23 15:46:55 np0005532761 podman[119534]: 2025-11-23 20:46:55.106004741 +0000 UTC m=+0.461019207 container died c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:46:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f951fb304f128861b0f4e4b1cd588e1e528cc8a36120ce241b4980808a1cb4c0-merged.mount: Deactivated successfully.
Nov 23 15:46:55 np0005532761 podman[119534]: 2025-11-23 20:46:55.157378574 +0000 UTC m=+0.512393030 container remove c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:46:55 np0005532761 systemd[1]: libpod-conmon-c9c25253185023bca57ffac6fc76b70494ea56846dcaaefeeed4e897cc9376f1.scope: Deactivated successfully.
Nov 23 15:46:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.671554491 +0000 UTC m=+0.035530618 container create c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:46:55 np0005532761 systemd[1]: Started libpod-conmon-c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254.scope.
Nov 23 15:46:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.750977404 +0000 UTC m=+0.114953551 container init c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.657176231 +0000 UTC m=+0.021152378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.757474736 +0000 UTC m=+0.121450863 container start c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:46:55 np0005532761 priceless_dhawan[119836]: 167 167
Nov 23 15:46:55 np0005532761 systemd[1]: libpod-c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254.scope: Deactivated successfully.
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.764370698 +0000 UTC m=+0.128346845 container attach c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.764724947 +0000 UTC m=+0.128701064 container died c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:46:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c910bc2d512a07cf50b1a3236818544b546f4ad0862bd32cfbf64c1d721c2e7a-merged.mount: Deactivated successfully.
Nov 23 15:46:55 np0005532761 podman[119820]: 2025-11-23 20:46:55.80768059 +0000 UTC m=+0.171656717 container remove c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_dhawan, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:46:55 np0005532761 systemd[1]: libpod-conmon-c79313a0cd8fc504adf6087433a9069ad7c3b1fd8515d32adf6d60c311761254.scope: Deactivated successfully.
Nov 23 15:46:55 np0005532761 python3.9[119817]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:55 np0005532761 podman[119861]: 2025-11-23 20:46:55.954171641 +0000 UTC m=+0.039731948 container create 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:46:56 np0005532761 systemd[1]: Started libpod-conmon-0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8.scope.
Nov 23 15:46:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:55.936880686 +0000 UTC m=+0.022441023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f89e21b7d808f09abd1575a99cd6c6cd54848f6a27a16d9878f52375ee1d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f89e21b7d808f09abd1575a99cd6c6cd54848f6a27a16d9878f52375ee1d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f89e21b7d808f09abd1575a99cd6c6cd54848f6a27a16d9878f52375ee1d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f89e21b7d808f09abd1575a99cd6c6cd54848f6a27a16d9878f52375ee1d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:56.050519412 +0000 UTC m=+0.136079739 container init 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:56.056345585 +0000 UTC m=+0.141905912 container start 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:56.060284529 +0000 UTC m=+0.145844856 container attach 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:46:56 np0005532761 angry_margulis[119901]: {
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:    "1": [
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:        {
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "devices": [
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "/dev/loop3"
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            ],
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "lv_name": "ceph_lv0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "lv_size": "21470642176",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "name": "ceph_lv0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "tags": {
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.cluster_name": "ceph",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.crush_device_class": "",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.encrypted": "0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.osd_id": "1",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.type": "block",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.vdo": "0",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:                "ceph.with_tpm": "0"
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            },
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "type": "block",
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:            "vg_name": "ceph_vg0"
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:        }
Nov 23 15:46:56 np0005532761 angry_margulis[119901]:    ]
Nov 23 15:46:56 np0005532761 angry_margulis[119901]: }
Nov 23 15:46:56 np0005532761 systemd[1]: libpod-0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8.scope: Deactivated successfully.
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:56.366197305 +0000 UTC m=+0.451757612 container died 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:46:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2c1f89e21b7d808f09abd1575a99cd6c6cd54848f6a27a16d9878f52375ee1d1-merged.mount: Deactivated successfully.
Nov 23 15:46:56 np0005532761 podman[119861]: 2025-11-23 20:46:56.432453501 +0000 UTC m=+0.518013808 container remove 0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:46:56 np0005532761 systemd[1]: libpod-conmon-0c885844f6c97f415f0f00e4f1c067baed083d5171623acce2f717b609e8f9e8.scope: Deactivated successfully.
Nov 23 15:46:56 np0005532761 python3.9[120060]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:46:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:56 np0005532761 podman[120184]: 2025-11-23 20:46:56.923709963 +0000 UTC m=+0.038794004 container create 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 15:46:56 np0005532761 systemd[1]: Started libpod-conmon-61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d.scope.
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:46:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:46:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:46:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:56.906617113 +0000 UTC m=+0.021701184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:57.008780986 +0000 UTC m=+0.123865067 container init 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:57.015243746 +0000 UTC m=+0.130327797 container start 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:46:57 np0005532761 inspiring_beaver[120200]: 167 167
Nov 23 15:46:57 np0005532761 systemd[1]: libpod-61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d.scope: Deactivated successfully.
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:57.021195024 +0000 UTC m=+0.136279105 container attach 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:57.021497512 +0000 UTC m=+0.136581563 container died 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 15:46:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b9fe08410641ff8ec50682edd1afd1568d8f6222de25c9df8a7beb9d9a388e45-merged.mount: Deactivated successfully.
Nov 23 15:46:57 np0005532761 podman[120184]: 2025-11-23 20:46:57.079775847 +0000 UTC m=+0.194859898 container remove 61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:46:57 np0005532761 systemd[1]: libpod-conmon-61dd29940f3dcf2476e25cde69502162da5c8cd4e810ebc536d939dfc45b951d.scope: Deactivated successfully.
Nov 23 15:46:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:57 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d60000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.221083063 +0000 UTC m=+0.042877851 container create 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:46:57 np0005532761 systemd[1]: Started libpod-conmon-045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b.scope.
Nov 23 15:46:57 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:46:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae88e21f6e1886b09f5840dd57e5cfcd6f11a81cd793eeb935046f8c46f384cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae88e21f6e1886b09f5840dd57e5cfcd6f11a81cd793eeb935046f8c46f384cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae88e21f6e1886b09f5840dd57e5cfcd6f11a81cd793eeb935046f8c46f384cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae88e21f6e1886b09f5840dd57e5cfcd6f11a81cd793eeb935046f8c46f384cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.295338851 +0000 UTC m=+0.117133639 container init 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.204371302 +0000 UTC m=+0.026166100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.302238633 +0000 UTC m=+0.124033411 container start 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.30591697 +0000 UTC m=+0.127711748 container attach 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:46:57 np0005532761 python3.9[120403]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:57] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 23 15:46:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:46:57] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 23 15:46:57 np0005532761 lvm[120491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:46:57 np0005532761 lvm[120491]: VG ceph_vg0 finished
Nov 23 15:46:57 np0005532761 objective_jepsen[120308]: {}
Nov 23 15:46:57 np0005532761 podman[120249]: 2025-11-23 20:46:57.949889048 +0000 UTC m=+0.771683846 container died 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:46:57 np0005532761 systemd[1]: libpod-045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b.scope: Deactivated successfully.
Nov 23 15:46:57 np0005532761 systemd[1]: libpod-045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b.scope: Consumed 1.031s CPU time.
Nov 23 15:46:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ae88e21f6e1886b09f5840dd57e5cfcd6f11a81cd793eeb935046f8c46f384cc-merged.mount: Deactivated successfully.
Nov 23 15:46:58 np0005532761 podman[120249]: 2025-11-23 20:46:58.111411706 +0000 UTC m=+0.933206484 container remove 045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 15:46:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:46:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:46:58 np0005532761 systemd[1]: libpod-conmon-045cbd905d7f7911fa4ae96d64ac9b186514559734515199fb8ed3f9cf653a0b.scope: Deactivated successfully.
Nov 23 15:46:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:58 np0005532761 python3.9[120661]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:46:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54001550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:58.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:46:59 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48000f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:46:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:59 np0005532761 python3.9[120815]: ansible-service_facts Invoked
Nov 23 15:46:59 np0005532761 network[120832]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:46:59 np0005532761 network[120833]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:46:59 np0005532761 network[120834]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:47:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:47:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204700 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:47:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:00.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:01 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:01.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:02 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:47:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:02 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:47:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:47:03
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.nfs', 'backups', 'volumes', 'default.rgw.meta']
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:47:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:03 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:47:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:47:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:47:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:03.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:47:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:47:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:04 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:47:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:04 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48001ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:04.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:05 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:05.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:06 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:06 np0005532761 python3.9[121293]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:47:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:47:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:06 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:07 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48002480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:07.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:07] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:47:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:07] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:47:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:08 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:47:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:08 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:08 np0005532761 python3.9[121451]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 15:47:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:09 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:10 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48002480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:10 np0005532761 python3.9[121604]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:47:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:10 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:10 np0005532761 python3.9[121683]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:11 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:11.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:12 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:12 np0005532761 python3.9[121836]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:12 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48002480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:12 np0005532761 python3.9[121915]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:12.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:13 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48002480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:13.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:14 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:47:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:14 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:14 np0005532761 python3.9[122071]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:14.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:15 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:16 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:16 np0005532761 python3.9[122224]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:47:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:16 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:16.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:47:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:17 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:17 np0005532761 python3.9[122335]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:17] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:47:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:17] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:47:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:18 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:47:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:47:18 np0005532761 systemd[1]: session-42.scope: Deactivated successfully.
Nov 23 15:47:18 np0005532761 systemd[1]: session-42.scope: Consumed 22.350s CPU time.
Nov 23 15:47:18 np0005532761 systemd-logind[820]: Session 42 logged out. Waiting for processes to exit.
Nov 23 15:47:18 np0005532761 systemd-logind[820]: Removed session 42.
Nov 23 15:47:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:18 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:19 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:19.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:20 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:47:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:20 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:20.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:21 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:21.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:22 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:22 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:22.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:23 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:24 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:24 np0005532761 systemd-logind[820]: New session 43 of user zuul.
Nov 23 15:47:24 np0005532761 systemd[1]: Started Session 43 of User zuul.
Nov 23 15:47:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:47:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:24 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:24 np0005532761 python3.9[122524]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:25 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:25 np0005532761 python3.9[122677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:26 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:26 np0005532761 python3.9[122755]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:26 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:26 np0005532761 systemd[1]: session-43.scope: Deactivated successfully.
Nov 23 15:47:26 np0005532761 systemd[1]: session-43.scope: Consumed 1.453s CPU time.
Nov 23 15:47:26 np0005532761 systemd-logind[820]: Session 43 logged out. Waiting for processes to exit.
Nov 23 15:47:26 np0005532761 systemd-logind[820]: Removed session 43.
Nov 23 15:47:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:47:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:26.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:27 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d440023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:27] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 23 15:47:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:27] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 23 15:47:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:28 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:28 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:29 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:29.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:30 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d30000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:47:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:30 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d54002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:30.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:31 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:31.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:32 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:32 np0005532761 systemd-logind[820]: New session 44 of user zuul.
Nov 23 15:47:32 np0005532761 systemd[1]: Started Session 44 of User zuul.
Nov 23 15:47:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:32 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:32.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:33 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:47:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:47:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:47:33 np0005532761 python3.9[122945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:47:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:33.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:34 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:34 np0005532761 python3.9[123101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:47:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:34 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:34.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:35 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:35.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:35 np0005532761 python3.9[123278]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:35 np0005532761 python3.9[123356]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.eh4biwfy recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:36 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:36 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:36 np0005532761 python3.9[123509]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:36.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:37 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:37.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:37 np0005532761 python3.9[123588]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.pyfpz5h8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:37] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:37] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:38 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:38 np0005532761 python3.9[123767]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:38 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:38.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:39 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:39 np0005532761 python3.9[123921]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:39.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:39 np0005532761 python3.9[123999]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:40 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:40 np0005532761 python3.9[124151]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:47:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:40 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:40 np0005532761 python3.9[124230]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:41 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:41 np0005532761 python3.9[124383]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:42 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:42 np0005532761 python3.9[124536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:42 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:42.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:43 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:43 np0005532761 python3.9[124615]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:43 np0005532761 systemd[91608]: Created slice User Background Tasks Slice.
Nov 23 15:47:43 np0005532761 systemd[91608]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 15:47:43 np0005532761 systemd[91608]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 15:47:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:44 np0005532761 python3.9[124770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:44 np0005532761 python3.9[124849]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:47:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:44.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:45 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:46 np0005532761 python3.9[125002]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:46 np0005532761 systemd[1]: Reloading.
Nov 23 15:47:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:46 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:46 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:47:46 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:47:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:46 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:46.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:47 np0005532761 python3.9[125193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:47 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:47 np0005532761 python3.9[125272]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:47] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:47] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:48 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:47:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
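The mon audit trail above records the mgr's cephadm module polling `osd blocklist ls` as JSON. The same query can be issued from the node's shell; a sketch assuming a usable /etc/ceph/ceph.conf and admin keyring on the host:

```python
import json
import subprocess

# Hedged sketch: the same query the mgr dispatches above, run via the CLI.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
)
entries = json.loads(out.stdout or "[]")
print(f"{len(entries)} blocklisted client address(es)")
```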
Nov 23 15:47:48 np0005532761 python3.9[125424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:48 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d30002ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:48 np0005532761 python3.9[125503]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:49 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:49.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:49 np0005532761 python3.9[125656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:49 np0005532761 systemd[1]: Reloading.
Nov 23 15:47:49 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:47:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:47:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:50 np0005532761 systemd[1]: Starting Create netns directory...
Nov 23 15:47:50 np0005532761 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:47:50 np0005532761 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:47:50 np0005532761 systemd[1]: Finished Create netns directory.
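The oneshot netns-placeholder unit deployed by the Ansible tasks above starts, does its work, and deactivates within the same second. The unit's commands are not in the log, so the only thing checkable afterwards is the visible effect; a sketch of that check, assuming the unit's job is to make /run/netns usable for iproute2:

```python
import os
import subprocess

# Hedged sketch: verify the effect of the netns-placeholder oneshot above.
print("/run/netns exists:", os.path.isdir("/run/netns"))
listing = subprocess.run(["ip", "netns", "list"],
                         capture_output=True, text=True)
print(listing.stdout.strip() or "(no named network namespaces)")
```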
Nov 23 15:47:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
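The `_set_new_cache_sizes` line above recurs every few seconds with the same numbers; a quick arithmetic check that the three allocations fit the reported budget:

```python
# Hedged sketch: sanity-check the mon cache split logged above.
cache_size = 1020054731          # ~972.8 MiB budget
inc_alloc = full_alloc = 348127232   # 332 MiB each
kv_alloc = 318767104                 # 304 MiB

total = inc_alloc + full_alloc + kv_alloc
assert total <= cache_size
print(f"{total} / {cache_size} bytes ({total / cache_size:.1%} of budget)")
```

The split sums to 1015021568 bytes, about 99.5% of the budget, so the tuner is steady rather than oscillating.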
Nov 23 15:47:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:47:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:51 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:51 np0005532761 python3.9[125850]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:47:51 np0005532761 network[125867]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:47:51 np0005532761 network[125868]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:47:51 np0005532761 network[125869]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:47:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:52 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:52 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:52.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:53 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:54 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:47:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:54 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:55 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:56.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:47:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:47:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:47:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:57 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:57.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:57] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:47:57] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 23 15:47:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d50003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:58 np0005532761 python3.9[126164]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:47:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:58 np0005532761 python3.9[126303]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:47:59 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d300034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:47:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
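The burst of mon_command dispatches above (generate-minimal-conf, auth get, config-key set, osd tree) is the cephadm module's periodic refresh. The same commands can be sent programmatically through librados; a sketch assuming python3-rados is installed and the admin keyring is readable:

```python
import json
import rados

# Hedged sketch: issue one of the mon commands audited above via librados
# instead of the CLI. conffile path and client name are assumptions.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()
try:
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(outbuf.decode())
finally:
    cluster.shutdown()
```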
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.869436694 +0000 UTC m=+0.036828207 container create eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:47:59 np0005532761 systemd[1]: Started libpod-conmon-eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183.scope.
Nov 23 15:47:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.941508052 +0000 UTC m=+0.108899585 container init eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:47:59 np0005532761 python3.9[126557]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.852497274 +0000 UTC m=+0.019888807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.947936019 +0000 UTC m=+0.115327532 container start eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.950826204 +0000 UTC m=+0.118217737 container attach eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:47:59 np0005532761 happy_bartik[126589]: 167 167
Nov 23 15:47:59 np0005532761 systemd[1]: libpod-eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183.scope: Deactivated successfully.
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.954787105 +0000 UTC m=+0.122178628 container died eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:47:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b48349ad39c1f8a8bd0659abed510b7f55e4eb6a1733c3887b87a6b75f00c566-merged.mount: Deactivated successfully.
Nov 23 15:47:59 np0005532761 podman[126572]: 2025-11-23 20:47:59.996106237 +0000 UTC m=+0.163497750 container remove eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:48:00 np0005532761 systemd[1]: libpod-conmon-eec200f2d378396a3b9fe97a0b2788420e65ddbe8ce0a2b14a35b2fcecc4c183.scope: Deactivated successfully.
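The short-lived podman container above (create, init, start, attach, "167 167", died, remove, all inside one second) looks like cephadm probing the ceph uid/gid inside the image before deploying daemons. A sketch reproducing that probe; the `stat` entrypoint and the /var/lib/ceph target are assumptions based on the "167 167" output:

```python
import subprocess

# Hedged sketch: reproduce the apparent uid/gid probe of the throwaway
# containers above. 167 is the ceph uid/gid in the official images.
image = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # expected: "167 167"
```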
Nov 23 15:48:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.15477964 +0000 UTC m=+0.040706046 container create ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:48:00 np0005532761 systemd[1]: Started libpod-conmon-ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85.scope.
Nov 23 15:48:00 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:48:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.138484699 +0000 UTC m=+0.024411105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.236133419 +0000 UTC m=+0.122059805 container init ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.24385139 +0000 UTC m=+0.129777776 container start ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.24654551 +0000 UTC m=+0.132471886 container attach ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:48:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:00 np0005532761 laughing_tharp[126654]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:48:00 np0005532761 laughing_tharp[126654]: --> All data devices are unavailable
Nov 23 15:48:00 np0005532761 systemd[1]: libpod-ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85.scope: Deactivated successfully.
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.583455964 +0000 UTC m=+0.469382350 container died ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:48:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2c975d2147b6786909891aff600bc77b4a3247d25ea31e74ed91c6932bc7f202-merged.mount: Deactivated successfully.
Nov 23 15:48:00 np0005532761 podman[126637]: 2025-11-23 20:48:00.62300533 +0000 UTC m=+0.508931706 container remove ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tharp, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:48:00 np0005532761 systemd[1]: libpod-conmon-ffb76f6f6c430b8933ed74ec3540c2885cd31234cae1e06c7efb231124eb1c85.scope: Deactivated successfully.
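The laughing_tharp container above is a ceph-volume dry run that finds "0 physical, 1 LVM" data devices and declares them all unavailable, which is expected once the LV is already consumed by an OSD. When that message is a surprise, ceph-volume's inventory explains per-device why; a sketch assuming ceph-volume is invocable on the host (cephadm normally runs it inside the ceph container instead):

```python
import json
import subprocess

# Hedged sketch: list each device with its availability and the reasons
# ceph-volume rejected it, matching the "unavailable" verdict above.
out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
)
for dev in json.loads(out.stdout):
    print(dev["path"], dev["available"], dev.get("rejected_reasons", []))
```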
Nov 23 15:48:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:48:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:00 np0005532761 python3.9[126830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.11832844 +0000 UTC m=+0.036036474 container create fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:48:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:01 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:01 np0005532761 systemd[1]: Started libpod-conmon-fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0.scope.
Nov 23 15:48:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.189374502 +0000 UTC m=+0.107082556 container init fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.19584803 +0000 UTC m=+0.113556064 container start fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.101017432 +0000 UTC m=+0.018725486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.199296009 +0000 UTC m=+0.117004063 container attach fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:48:01 np0005532761 goofy_feynman[126993]: 167 167
Nov 23 15:48:01 np0005532761 systemd[1]: libpod-fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0.scope: Deactivated successfully.
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.202198094 +0000 UTC m=+0.119906128 container died fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:48:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1ed3d4cf67ba5ef9f8e558e3243fc36e38df16c9082af809cd053296afc994b5-merged.mount: Deactivated successfully.
Nov 23 15:48:01 np0005532761 podman[126948]: 2025-11-23 20:48:01.240753145 +0000 UTC m=+0.158461179 container remove fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:48:01 np0005532761 systemd[1]: libpod-conmon-fddb72e472d8e306bd8b72194504bc0d813c49c762adbacca0c2bea88854cdf0.scope: Deactivated successfully.
Nov 23 15:48:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:01 np0005532761 python3.9[126990]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.394082469 +0000 UTC m=+0.036031674 container create 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:48:01 np0005532761 systemd[1]: Started libpod-conmon-6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3.scope.
Nov 23 15:48:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:48:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b438741d4c3fedf013f3102967fc18147c88e6d30ced1bc9b5def9a35a68a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b438741d4c3fedf013f3102967fc18147c88e6d30ced1bc9b5def9a35a68a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b438741d4c3fedf013f3102967fc18147c88e6d30ced1bc9b5def9a35a68a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b438741d4c3fedf013f3102967fc18147c88e6d30ced1bc9b5def9a35a68a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.460257615 +0000 UTC m=+0.102206870 container init 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.466717552 +0000 UTC m=+0.108666757 container start 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.469288779 +0000 UTC m=+0.111237984 container attach 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.379009009 +0000 UTC m=+0.020958234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]: {
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:    "1": [
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:        {
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "devices": [
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "/dev/loop3"
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            ],
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "lv_name": "ceph_lv0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "lv_size": "21470642176",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "name": "ceph_lv0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "tags": {
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.cluster_name": "ceph",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.crush_device_class": "",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.encrypted": "0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.osd_id": "1",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.type": "block",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.vdo": "0",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:                "ceph.with_tpm": "0"
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            },
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "type": "block",
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:            "vg_name": "ceph_vg0"
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:        }
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]:    ]
Nov 23 15:48:01 np0005532761 dreamy_lichterman[127057]: }
Nov 23 15:48:01 np0005532761 systemd[1]: libpod-6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3.scope: Deactivated successfully.
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.756139156 +0000 UTC m=+0.398088371 container died 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:48:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0b438741d4c3fedf013f3102967fc18147c88e6d30ced1bc9b5def9a35a68a96-merged.mount: Deactivated successfully.
Nov 23 15:48:01 np0005532761 podman[127017]: 2025-11-23 20:48:01.798707359 +0000 UTC m=+0.440656564 container remove 6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:48:01 np0005532761 systemd[1]: libpod-conmon-6385d95cb274136559c046da39e80f7b5e9327ec11f413799e4c8108ea5632f3.scope: Deactivated successfully.
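The dreamy_lichterman container above emits a ceph-volume lvm-list style JSON report mapping OSD 1 to its logical volume. A sketch of digesting that output; `raw` below is a trimmed copy of the blob printed in the log, kept to the fields the loop uses:

```python
import json

# Hedged sketch: summarize the lvm-list JSON printed above.
raw = """
{
  "1": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
               "ceph.osd_id": "1"}
    }
  ]
}
"""
for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")
```

This confirms the earlier "0 physical, 1 LVM" finding: the only candidate device is a loop-backed LV that already carries osd.1.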
Nov 23 15:48:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:02 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.333654007 +0000 UTC m=+0.055384436 container create e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:48:02 np0005532761 systemd[1]: Started libpod-conmon-e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6.scope.
Nov 23 15:48:02 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.316732729 +0000 UTC m=+0.038463178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.430429047 +0000 UTC m=+0.152159566 container init e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.437544741 +0000 UTC m=+0.159275170 container start e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:48:02 np0005532761 modest_dubinsky[127286]: 167 167
Nov 23 15:48:02 np0005532761 systemd[1]: libpod-e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6.scope: Deactivated successfully.
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.443876585 +0000 UTC m=+0.165607034 container attach e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.444265495 +0000 UTC m=+0.165995924 container died e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:48:02 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cb1e2a87d9d6aaab5e0a6c11ad03d5a04b1907a533d8eb73968dbdb9d1fe5940-merged.mount: Deactivated successfully.
Nov 23 15:48:02 np0005532761 podman[127245]: 2025-11-23 20:48:02.477954499 +0000 UTC m=+0.199684938 container remove e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_dubinsky, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:48:02 np0005532761 systemd[1]: libpod-conmon-e38495c5d769a4b123ed49535f005384ff0971dbba94d6b532b022591794e3d6.scope: Deactivated successfully.
Nov 23 15:48:02 np0005532761 python3.9[127318]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:48:02 np0005532761 podman[127338]: 2025-11-23 20:48:02.656347104 +0000 UTC m=+0.044601768 container create 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:48:02 np0005532761 systemd[1]: Starting Time & Date Service...
Nov 23 15:48:02 np0005532761 systemd[1]: Started libpod-conmon-8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944.scope.
Nov 23 15:48:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:02 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:48:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d93c958db29de355c4838576c8cd6690b95fb51ffa1e3af97c4101f9b39c673/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:02 np0005532761 podman[127338]: 2025-11-23 20:48:02.636858518 +0000 UTC m=+0.025113202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d93c958db29de355c4838576c8cd6690b95fb51ffa1e3af97c4101f9b39c673/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d93c958db29de355c4838576c8cd6690b95fb51ffa1e3af97c4101f9b39c673/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d93c958db29de355c4838576c8cd6690b95fb51ffa1e3af97c4101f9b39c673/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:02 np0005532761 podman[127338]: 2025-11-23 20:48:02.754289593 +0000 UTC m=+0.142544257 container init 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:48:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:02 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:02 np0005532761 podman[127338]: 2025-11-23 20:48:02.765217356 +0000 UTC m=+0.153472040 container start 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:48:02 np0005532761 podman[127338]: 2025-11-23 20:48:02.76883883 +0000 UTC m=+0.157093514 container attach 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:48:02 np0005532761 systemd[1]: Started Time & Date Service.
Nov 23 15:48:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:48:03
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:48:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:03 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44001b50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:48:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:48:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:03 np0005532761 lvm[127458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:48:03 np0005532761 lvm[127458]: VG ceph_vg0 finished
Nov 23 15:48:03 np0005532761 sleepy_feistel[127356]: {}
Nov 23 15:48:03 np0005532761 systemd[1]: libpod-8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944.scope: Deactivated successfully.
Nov 23 15:48:03 np0005532761 podman[127338]: 2025-11-23 20:48:03.413318478 +0000 UTC m=+0.801573142 container died 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:48:03 np0005532761 systemd[1]: libpod-8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944.scope: Consumed 1.028s CPU time.
Nov 23 15:48:03 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4d93c958db29de355c4838576c8cd6690b95fb51ffa1e3af97c4101f9b39c673-merged.mount: Deactivated successfully.
Nov 23 15:48:03 np0005532761 podman[127338]: 2025-11-23 20:48:03.461941138 +0000 UTC m=+0.850195802 container remove 8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:48:03 np0005532761 systemd[1]: libpod-conmon-8e3f472a094f24f30e2d51d7cbf9a1c303ee1541de80c7ef21c977b8c20f5944.scope: Deactivated successfully.
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532761 python3.9[127626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:04 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:04 np0005532761 python3.9[127779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:48:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:04 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:04.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:05 np0005532761 python3.9[127857]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:05 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:05 np0005532761 python3.9[128010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:06 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44001b50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:06 np0005532761 python3.9[128089]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gtwhrgtz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:06 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:06.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:07 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:07 np0005532761 python3.9[128244]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:07] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:48:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:07] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:48:07 np0005532761 python3.9[128324]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:08 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:08 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44002860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:09 np0005532761 python3.9[128479]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:09 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:10 np0005532761 python3[128633]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:48:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:10 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:48:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:10 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44002860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:10 np0005532761 python3.9[128786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:10.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:11 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:11 np0005532761 python3.9[128865]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:12 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d540043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:12 np0005532761 python3.9[129017]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:12 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:12 np0005532761 python3.9[129096]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:13 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:13.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:13 np0005532761 python3.9[129249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:14 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:14 np0005532761 python3.9[129327]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:48:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:14 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:15 np0005532761 python3.9[129485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:15 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d480008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:15 np0005532761 python3.9[129563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:16 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:16 np0005532761 python3.9[129716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:16 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44002860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:17 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:17 np0005532761 python3.9[129795]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:17] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:48:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:17] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:48:18 np0005532761 python3.9[129972]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:18 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:48:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:48:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:18 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d240016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:18.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:19 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:19 np0005532761 python3.9[130129]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:19 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:48:19 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:48:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:19.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:20 np0005532761 python3.9[130282]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:20 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d480008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:20 np0005532761 python3.9[130435]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:48:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:20 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:48:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:20.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:48:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:21 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d240016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:22 np0005532761 python3.9[130588]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:48:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:22 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:22 np0005532761 python3.9[130741]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:48:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:22 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d480036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:23 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:23 np0005532761 systemd[1]: session-44.scope: Deactivated successfully.
Nov 23 15:48:23 np0005532761 systemd[1]: session-44.scope: Consumed 28.755s CPU time.
Nov 23 15:48:23 np0005532761 systemd-logind[820]: Session 44 logged out. Waiting for processes to exit.
Nov 23 15:48:23 np0005532761 systemd-logind[820]: Removed session 44.
Nov 23 15:48:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:24 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d240016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:24 np0005532761 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 15:48:24 np0005532761 systemd[1]: session-18.scope: Consumed 1min 33.200s CPU time.
Nov 23 15:48:24 np0005532761 systemd-logind[820]: Session 18 logged out. Waiting for processes to exit.
Nov 23 15:48:24 np0005532761 systemd-logind[820]: Removed session 18.
Nov 23 15:48:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:48:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:24 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:24.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:25 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:25.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:26 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:26 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:26.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:27 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:48:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:48:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:28 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:28 np0005532761 systemd-logind[820]: New session 45 of user zuul.
Nov 23 15:48:28 np0005532761 systemd[1]: Started Session 45 of User zuul.
Nov 23 15:48:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:28 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:29 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:29.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:29 np0005532761 python3.9[130928]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 15:48:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:30 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:30 np0005532761 python3.9[131081]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:48:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:48:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:30 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:31 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:31 np0005532761 python3.9[131236]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 23 15:48:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:32 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:32 np0005532761 python3.9[131388]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.sy18f9ax follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:32 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:32 np0005532761 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:48:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:33.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:33 np0005532761 python3.9[131517]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.sy18f9ax mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930911.861835-102-64062476074238/.source.sy18f9ax _original_basename=.pibgrfhr follow=False checksum=6cd7b37efcd593debc42fa9bb68a32d60f10fcfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:48:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:48:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:33 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:48:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:48:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:34 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:34 np0005532761 python3.9[131669]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:48:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:34 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003fd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:35 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:35 np0005532761 python3.9[131823]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=
compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v
compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=
compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=
compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN
compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=
compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=
compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r
compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=
 create=True mode=0644 path=/tmp/ansible.sy18f9ax state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204835 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:48:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:36 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:48:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:36 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:36 np0005532761 python3.9[131975]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.sy18f9ax' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:36.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:37 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:37 np0005532761 python3.9[132156]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.sy18f9ax state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:37] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:48:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:37] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:48:38 np0005532761 systemd[1]: session-45.scope: Deactivated successfully.
Nov 23 15:48:38 np0005532761 systemd[1]: session-45.scope: Consumed 5.273s CPU time.
Nov 23 15:48:38 np0005532761 systemd-logind[820]: Session 45 logged out. Waiting for processes to exit.
Nov 23 15:48:38 np0005532761 systemd-logind[820]: Removed session 45.
Nov 23 15:48:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:38 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:48:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:38 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:39 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:39.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:40 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:48:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:40 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:41 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:41.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:42 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:48:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:42 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:43 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:43 np0005532761 systemd-logind[820]: New session 46 of user zuul.
Nov 23 15:48:43 np0005532761 systemd[1]: Started Session 46 of User zuul.
Nov 23 15:48:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d24003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:48:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:48:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:44 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:44 np0005532761 python3.9[132344]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:45.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:45 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:45.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.427969) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925427996, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 251, "total_data_size": 4047625, "memory_usage": 4102488, "flush_reason": "Manual Compaction"}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925439427, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2473480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10808, "largest_seqno": 12692, "table_properties": {"data_size": 2467201, "index_size": 3222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15622, "raw_average_key_size": 20, "raw_value_size": 2453502, "raw_average_value_size": 3178, "num_data_blocks": 143, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930734, "oldest_key_time": 1763930734, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11527 microseconds, and 5152 cpu microseconds.
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.439493) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2473480 bytes OK
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.439513) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.447097) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.447110) EVENT_LOG_v1 {"time_micros": 1763930925447106, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.447125) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4039951, prev total WAL file size 4039951, number of live WAL files 2.
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.448069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2415KB)], [26(13MB)]
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925448116, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16381245, "oldest_snapshot_seqno": -1}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4412 keys, 14652547 bytes, temperature: kUnknown
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925619742, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14652547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14618826, "index_size": 21579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111356, "raw_average_key_size": 25, "raw_value_size": 14534122, "raw_average_value_size": 3294, "num_data_blocks": 926, "num_entries": 4412, "num_filter_entries": 4412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.620212) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14652547 bytes
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.639742) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.4 rd, 85.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 13.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.9) OK, records in: 4845, records dropped: 433 output_compression: NoCompression
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.639780) EVENT_LOG_v1 {"time_micros": 1763930925639766, "job": 10, "event": "compaction_finished", "compaction_time_micros": 171782, "compaction_time_cpu_micros": 26300, "output_level": 6, "num_output_files": 1, "total_output_size": 14652547, "num_input_records": 4845, "num_output_records": 4412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925640382, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925642676, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.448010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.642704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.642707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.642709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.642710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:48:45.642711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:46 np0005532761 python3.9[132503]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:48:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:46 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:48:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:46 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c001160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:46.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:48:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:46.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:47.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:47 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:47.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:47 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:48:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:47 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:48:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:47] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:48:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:47] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:48:48 np0005532761 python3.9[132659]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:48:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:48:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:48:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:48 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:48:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:48 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:49.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:49 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c001160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:49 np0005532761 python3.9[132814]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:49.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:50 np0005532761 python3.9[132967]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:48:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:48:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:48:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:50 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:51.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:51 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:51 np0005532761 python3.9[133121]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:51 np0005532761 systemd[1]: session-46.scope: Deactivated successfully.
Nov 23 15:48:51 np0005532761 systemd[1]: session-46.scope: Consumed 3.831s CPU time.
Nov 23 15:48:51 np0005532761 systemd-logind[820]: Session 46 logged out. Waiting for processes to exit.
Nov 23 15:48:51 np0005532761 systemd-logind[820]: Removed session 46.
Nov 23 15:48:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:52 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c001160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:48:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:52 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:53.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:53 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:54 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c001160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:48:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:54 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:55 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:55.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:48:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:56 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:56 np0005532761 systemd-logind[820]: New session 47 of user zuul.
Nov 23 15:48:56 np0005532761 systemd[1]: Started Session 47 of User zuul.
Nov 23 15:48:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:48:56.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:48:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:57 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000043s ======
Nov 23 15:48:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Nov 23 15:48:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204857 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:48:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:48:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:48:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:48:57 np0005532761 python3.9[133332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d44003ee0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:48:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:48:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 2706 writes, 12K keys, 2706 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 2706 writes, 2706 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2706 writes, 12K keys, 2706 commit groups, 1.0 writes per commit group, ingest: 24.28 MB, 0.04 MB/s
Interval WAL: 2706 writes, 2706 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     86.5      0.24              0.04         5    0.049       0      0       0.0       0.0
  L6      1/0   13.97 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.4     96.4     84.6      0.60              0.12         4    0.149     16K   1777       0.0       0.0
 Sum      1/0   13.97 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.4     68.5     85.1      0.84              0.16         9    0.093     16K   1777       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.4     68.9     85.5      0.84              0.16         8    0.104     16K   1777       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     96.4     84.6      0.60              0.12         4    0.149     16K   1777       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     88.0      0.24              0.04         4    0.060       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.021, interval 0.020
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf3f93d350#2 capacity: 304.00 MB usage: 2.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(156,1.89 MB,0.623166%) FilterBlock(10,56.73 KB,0.0182252%) IndexBlock(10,117.86 KB,0.0378609%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
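The stats dump above leaves journald as a single line, with the embedded control characters rendered as #ooo octal escapes (#012 is a newline); it is shown unescaped here for readability. A minimal sketch of that unescaping:

    import re

    def unescape_ctrl(s: str) -> str:
        # rsyslog/journald render embedded control characters as three
        # octal digits after '#'; chr(0o12) == '\n'.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), s)

    print(unescape_ctrl("** DB Stats **#012Uptime(secs): 600.1 total"))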
Nov 23 15:48:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:58 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d48003fe0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:59.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:48:59 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2c001300 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:59 np0005532761 python3.9[133490]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:48:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:48:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000043s ======
Nov 23 15:48:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:59.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Nov 23 15:49:00 np0005532761 python3.9[133574]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:49:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:49:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004090 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:49:00 np0005532761 kernel: ganesha.nfsd[120142]: segfault at 50 ip 00007f3e087ca32e sp 00007f3dccff8210 error 4 in libntirpc.so.5.8[7f3e087af000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 23 15:49:00 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
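The fault address 50 and the Code: bytes line up: decoding from the marked byte (<45>, the instruction at the reported ip) shows a 32-bit load from r13+0x50, so r13 was NULL at the moment of the crash. A sketch using the capstone Python bindings (assumed installed; bytes copied from the dump above); the resulting core, logged below as PID 117791, can then typically be opened with coredumpctl debug 117791 once debuginfo is available:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    FAULT_RIP = 0x7F3E087CA32E  # "ip" from the segfault line
    # Bytes from the marked <45> onward in the Code: dump.
    CODE = bytes.fromhex("458b6550498b7568418bbe28020000b940000000")

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(CODE, FAULT_RIP):
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")
    # First line: 0x7f3e087ca32e: mov r12d, dword ptr [r13 + 0x50]
    # With r13 == 0 the load touches address 0x50 -- matching "segfault at 50"
    # and error 4 (user-mode read of an unmapped page).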
Nov 23 15:49:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[117779]: 23/11/2025 20:49:00 : epoch 692372b4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d38004090 fd 15 proxy ignored for local
Nov 23 15:49:00 np0005532761 systemd[1]: Started Process Core Dump (PID 133577/UID 0).
Nov 23 15:49:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:01.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:01.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:01 np0005532761 systemd-coredump[133578]: Process 117791 (ganesha.nfsd) of user 0 dumped core.

Stack trace of thread 47:
#0  0x00007f3e087ca32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
ELF object binary architecture: AMD x86-64
Nov 23 15:49:02 np0005532761 systemd[1]: systemd-coredump@2-133577-0.service: Deactivated successfully.
Nov 23 15:49:02 np0005532761 systemd[1]: systemd-coredump@2-133577-0.service: Consumed 1.136s CPU time.
Nov 23 15:49:02 np0005532761 podman[133710]: 2025-11-23 20:49:02.077109057 +0000 UTC m=+0.037125956 container died afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:49:02 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a686e2ed13e4739f4c51c24a9184656442faf40805bd271380bc7bb2349d7003-merged.mount: Deactivated successfully.
Nov 23 15:49:02 np0005532761 podman[133710]: 2025-11-23 20:49:02.123329441 +0000 UTC m=+0.083346350 container remove afbd261f1081322f780a88f013ad53c885ceea79d632c52c83035d5780fc8073 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 15:49:02 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:49:02 np0005532761 python3.9[133742]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:49:02 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:49:02 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.567s CPU time.
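status=139 is the shell-style encoding of death by signal, 128 plus the signal number, consistent with the SIGSEGV above:

    import signal

    status = 139  # from "Main process exited, code=exited, status=139"
    print(status - 128, signal.Signals(status - 128).name)  # -> 11 SIGSEGV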
Nov 23 15:49:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:03.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:49:03
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', 'vms', '.nfs', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:49:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:49:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
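Each pg_autoscaler line above is reproducible as usage ratio x bias x 300, where 300 matches the default mon_target_pg_per_osd (100) times this cluster's three OSDs; the 100 x 3 decomposition is an inference from the logged numbers, not read out of the mgr source:

    # Reproduces the pg_autoscaler arithmetic from the lines above.
    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd default * 3 OSDs

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * TARGET_PGS

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.002155724995... ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.000610470795... ('cephfs.cephfs.meta')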
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:49:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:49:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:03.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:03 np0005532761 python3.9[133931]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:49:04 np0005532761 python3.9[134164]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:49:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:49:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:05 np0005532761 podman[134335]: 2025-11-23 20:49:05.086775601 +0000 UTC m=+0.035782767 container create 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:49:05 np0005532761 systemd[1]: Started libpod-conmon-96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71.scope.
Nov 23 15:49:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:05 np0005532761 podman[134335]: 2025-11-23 20:49:05.163904648 +0000 UTC m=+0.112911814 container init 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 15:49:05 np0005532761 podman[134335]: 2025-11-23 20:49:05.071532434 +0000 UTC m=+0.020539620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:05 np0005532761 podman[134335]: 2025-11-23 20:49:05.170695495 +0000 UTC m=+0.119702661 container start 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:49:05 np0005532761 podman[134335]: 2025-11-23 20:49:05.173461696 +0000 UTC m=+0.122468862 container attach 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:49:05 np0005532761 exciting_shtern[134374]: 167 167
Nov 23 15:49:05 np0005532761 systemd[1]: libpod-96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71.scope: Deactivated successfully.
Nov 23 15:49:05 np0005532761 podman[134403]: 2025-11-23 20:49:05.208605394 +0000 UTC m=+0.023007458 container died 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:49:05 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c252e1012cb4f2b812c51fe24104210cf743c47ad46d1ed0702f4be4bdf81607-merged.mount: Deactivated successfully.
Nov 23 15:49:05 np0005532761 podman[134403]: 2025-11-23 20:49:05.245589324 +0000 UTC m=+0.059991368 container remove 96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:49:05 np0005532761 systemd[1]: libpod-conmon-96976266c37e030b59d85e579babb407f20c07d41757939768af633115899d71.scope: Deactivated successfully.
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.411720638 +0000 UTC m=+0.042551985 container create e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:49:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:05 np0005532761 python3.9[134444]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:49:05 np0005532761 systemd[1]: Started libpod-conmon-e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc.scope.
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.391331015 +0000 UTC m=+0.022162372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
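The (0x7fffffff) in these xfs remount warnings is the 32-bit time_t ceiling:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch -- the last timestamp a signed
    # 32-bit time_t can represent.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00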
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.515049841 +0000 UTC m=+0.145881198 container init e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.523951171 +0000 UTC m=+0.154782518 container start e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.527740516 +0000 UTC m=+0.158571853 container attach e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 15:49:05 np0005532761 infallible_cannon[134469]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:49:05 np0005532761 infallible_cannon[134469]: --> All data devices are unavailable
Nov 23 15:49:05 np0005532761 systemd[1]: libpod-e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc.scope: Deactivated successfully.
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.867914369 +0000 UTC m=+0.498745746 container died e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:49:05 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f9b16f523c204a8fb4f36acf6438c74fe0cf52ef6061c13b97d37eb715ee755c-merged.mount: Deactivated successfully.
Nov 23 15:49:05 np0005532761 podman[134452]: 2025-11-23 20:49:05.918133428 +0000 UTC m=+0.548964775 container remove e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:49:05 np0005532761 systemd[1]: libpod-conmon-e6c4412dcf20367e18b40cdcb0898c816c7751808262e4df7bf8693b5210e9cc.scope: Deactivated successfully.
Nov 23 15:49:06 np0005532761 systemd[1]: session-47.scope: Deactivated successfully.
Nov 23 15:49:06 np0005532761 systemd[1]: session-47.scope: Consumed 5.697s CPU time.
Nov 23 15:49:06 np0005532761 systemd-logind[820]: Session 47 logged out. Waiting for processes to exit.
Nov 23 15:49:06 np0005532761 systemd-logind[820]: Removed session 47.
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.482538158 +0000 UTC m=+0.039602695 container create 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 15:49:06 np0005532761 systemd[1]: Started libpod-conmon-3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512.scope.
Nov 23 15:49:06 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.548545487 +0000 UTC m=+0.105610044 container init 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.554415565 +0000 UTC m=+0.111480102 container start 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.557254749 +0000 UTC m=+0.114319306 container attach 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 15:49:06 np0005532761 wizardly_tesla[134631]: 167 167
Nov 23 15:49:06 np0005532761 systemd[1]: libpod-3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512.scope: Deactivated successfully.
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.558762225 +0000 UTC m=+0.115826762 container died 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.468048914 +0000 UTC m=+0.025113481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1d0cf636af40d8b5e114f665a7a935ca13db5a6c887750e0b4e100c14a50f3c3-merged.mount: Deactivated successfully.
Nov 23 15:49:06 np0005532761 podman[134615]: 2025-11-23 20:49:06.588534948 +0000 UTC m=+0.145599485 container remove 3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:49:06 np0005532761 systemd[1]: libpod-conmon-3a7a4408d4b5435bc3d94d9b58cca3a871c8a22434baf617629fbaad0bcb0512.scope: Deactivated successfully.
Nov 23 15:49:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:06 np0005532761 podman[134655]: 2025-11-23 20:49:06.743349037 +0000 UTC m=+0.055097984 container create 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:49:06 np0005532761 systemd[1]: Started libpod-conmon-62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44.scope.
Nov 23 15:49:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204906 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:49:06 np0005532761 podman[134655]: 2025-11-23 20:49:06.719977633 +0000 UTC m=+0.031726570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:06 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c932375fb48aa8493bf7eca8e2843a7f44304b1a89193adf4c849e818502f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c932375fb48aa8493bf7eca8e2843a7f44304b1a89193adf4c849e818502f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c932375fb48aa8493bf7eca8e2843a7f44304b1a89193adf4c849e818502f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c932375fb48aa8493bf7eca8e2843a7f44304b1a89193adf4c849e818502f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:06 np0005532761 podman[134655]: 2025-11-23 20:49:06.841078445 +0000 UTC m=+0.152827352 container init 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:49:06 np0005532761 podman[134655]: 2025-11-23 20:49:06.847000985 +0000 UTC m=+0.158749892 container start 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:49:06 np0005532761 podman[134655]: 2025-11-23 20:49:06.849592927 +0000 UTC m=+0.161341834 container attach 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 23 15:49:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:49:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:06.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:49:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:06.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:49:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:07 np0005532761 compassionate_black[134672]: {
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:    "1": [
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:        {
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "devices": [
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "/dev/loop3"
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            ],
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "lv_name": "ceph_lv0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "lv_size": "21470642176",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "name": "ceph_lv0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "tags": {
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.cluster_name": "ceph",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.crush_device_class": "",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.encrypted": "0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.osd_id": "1",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.type": "block",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.vdo": "0",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:                "ceph.with_tpm": "0"
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            },
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "type": "block",
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:            "vg_name": "ceph_vg0"
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:        }
Nov 23 15:49:07 np0005532761 compassionate_black[134672]:    ]
Nov 23 15:49:07 np0005532761 compassionate_black[134672]: }
Nov 23 15:49:07 np0005532761 systemd[1]: libpod-62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44.scope: Deactivated successfully.
Nov 23 15:49:07 np0005532761 podman[134655]: 2025-11-23 20:49:07.205468638 +0000 UTC m=+0.517217565 container died 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:49:07 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3c932375fb48aa8493bf7eca8e2843a7f44304b1a89193adf4c849e818502f81-merged.mount: Deactivated successfully.
Nov 23 15:49:07 np0005532761 podman[134655]: 2025-11-23 20:49:07.24548475 +0000 UTC m=+0.557233657 container remove 62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_black, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 23 15:49:07 np0005532761 systemd[1]: libpod-conmon-62189a521401cd922a6ab300cf3c3eed927861e2b4032f733f3a8689f5014a44.scope: Deactivated successfully.
Nov 23 15:49:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:07.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.853153974 +0000 UTC m=+0.041027127 container create defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:49:07 np0005532761 systemd[1]: Started libpod-conmon-defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc.scope.
Nov 23 15:49:07 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.918648511 +0000 UTC m=+0.106521674 container init defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.923894711 +0000 UTC m=+0.111767884 container start defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.927268099 +0000 UTC m=+0.115141242 container attach defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.832738971 +0000 UTC m=+0.020612164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:07 np0005532761 intelligent_austin[134804]: 167 167
Nov 23 15:49:07 np0005532761 systemd[1]: libpod-defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc.scope: Deactivated successfully.
Nov 23 15:49:07 np0005532761 conmon[134804]: conmon defcbc60868246265f8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc.scope/container/memory.events
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.932169664 +0000 UTC m=+0.120042807 container died defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 23 15:49:07 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6f554b73da3f02a973eaf30adab9e30675f7c92e6d1ea53013256c8f8f334f0c-merged.mount: Deactivated successfully.
Nov 23 15:49:07 np0005532761 podman[134787]: 2025-11-23 20:49:07.962139305 +0000 UTC m=+0.150012448 container remove defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:49:07 np0005532761 systemd[1]: libpod-conmon-defcbc60868246265f8c272c305b40711c5cacfc1afc10fb36882b659b0979cc.scope: Deactivated successfully.
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.113490431 +0000 UTC m=+0.034878568 container create b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:49:08 np0005532761 systemd[1]: Started libpod-conmon-b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57.scope.
Nov 23 15:49:08 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:49:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5541e77522204f881a5ff69118032b93725e835bbb8c53cf00dcf1a97df6f601/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5541e77522204f881a5ff69118032b93725e835bbb8c53cf00dcf1a97df6f601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5541e77522204f881a5ff69118032b93725e835bbb8c53cf00dcf1a97df6f601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5541e77522204f881a5ff69118032b93725e835bbb8c53cf00dcf1a97df6f601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.190638829 +0000 UTC m=+0.112027056 container init b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.098459823 +0000 UTC m=+0.019847980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.202335342 +0000 UTC m=+0.123723479 container start b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.20572551 +0000 UTC m=+0.127113657 container attach b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:49:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:08 np0005532761 lvm[134919]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:49:08 np0005532761 lvm[134919]: VG ceph_vg0 finished
Nov 23 15:49:08 np0005532761 practical_shamir[134844]: {}
Nov 23 15:49:08 np0005532761 systemd[1]: libpod-b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57.scope: Deactivated successfully.
Nov 23 15:49:08 np0005532761 systemd[1]: libpod-b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57.scope: Consumed 1.033s CPU time.
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.892380651 +0000 UTC m=+0.813768788 container died b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:49:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5541e77522204f881a5ff69118032b93725e835bbb8c53cf00dcf1a97df6f601-merged.mount: Deactivated successfully.
Nov 23 15:49:08 np0005532761 podman[134828]: 2025-11-23 20:49:08.935692517 +0000 UTC m=+0.857080654 container remove b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:49:08 np0005532761 systemd[1]: libpod-conmon-b9c1a5e21ba0988c42b66b8347e4ec6428b7e6b9dea7b4d48f1448db24cd6d57.scope: Deactivated successfully.
Nov 23 15:49:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:49:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.843324) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949843355, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 480, "num_deletes": 251, "total_data_size": 514234, "memory_usage": 524320, "flush_reason": "Manual Compaction"}
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949852479, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 509525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12693, "largest_seqno": 13172, "table_properties": {"data_size": 506826, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6430, "raw_average_key_size": 18, "raw_value_size": 501364, "raw_average_value_size": 1440, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930926, "oldest_key_time": 1763930926, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9191 microseconds, and 2143 cpu microseconds.
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.852516) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 509525 bytes OK
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.852533) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854501) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854518) EVENT_LOG_v1 {"time_micros": 1763930949854513, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 511427, prev total WAL file size 511427, number of live WAL files 2.
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(497KB)], [29(13MB)]
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949855262, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15162072, "oldest_snapshot_seqno": -1}
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4245 keys, 13434951 bytes, temperature: kUnknown
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949995874, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13434951, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13403990, "index_size": 19267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108800, "raw_average_key_size": 25, "raw_value_size": 13323806, "raw_average_value_size": 3138, "num_data_blocks": 815, "num_entries": 4245, "num_filter_entries": 4245, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:49:09 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.996213) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13434951 bytes
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.001415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.8 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(56.1) write-amplify(26.4) OK, records in: 4760, records dropped: 515 output_compression: NoCompression
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.001467) EVENT_LOG_v1 {"time_micros": 1763930950001444, "job": 12, "event": "compaction_finished", "compaction_time_micros": 140655, "compaction_time_cpu_micros": 51583, "output_level": 6, "num_output_files": 1, "total_output_size": 13434951, "num_input_records": 4760, "num_output_records": 4245, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950001886, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950007918, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.008057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.008063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.008066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.008069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:49:10.008071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:11 np0005532761 systemd-logind[820]: New session 48 of user zuul.
Nov 23 15:49:11 np0005532761 systemd[1]: Started Session 48 of User zuul.
Nov 23 15:49:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:12 np0005532761 python3.9[135114]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:49:12 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 3.
Nov 23 15:49:12 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:49:12 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.567s CPU time.
Nov 23 15:49:12 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:49:12 np0005532761 podman[135193]: 2025-11-23 20:49:12.684615078 +0000 UTC m=+0.042498566 container create d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:49:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7662d69bc305e2e6b3cb0d8c4b113481dea13f1cf5583f747e67608d7ef3a143/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7662d69bc305e2e6b3cb0d8c4b113481dea13f1cf5583f747e67608d7ef3a143/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7662d69bc305e2e6b3cb0d8c4b113481dea13f1cf5583f747e67608d7ef3a143/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7662d69bc305e2e6b3cb0d8c4b113481dea13f1cf5583f747e67608d7ef3a143/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:12 np0005532761 podman[135193]: 2025-11-23 20:49:12.739149405 +0000 UTC m=+0.097032933 container init d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:49:12 np0005532761 podman[135193]: 2025-11-23 20:49:12.743634705 +0000 UTC m=+0.101518213 container start d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:49:12 np0005532761 bash[135193]: d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c
Nov 23 15:49:12 np0005532761 podman[135193]: 2025-11-23 20:49:12.66931487 +0000 UTC m=+0.027198388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:49:12 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:49:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:12 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:49:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:13.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:14 np0005532761 python3.9[135378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:14 np0005532761 python3.9[135531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:15.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:15 np0005532761 python3.9[135684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:16 np0005532761 python3.9[135807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930955.1350822-156-197556348451840/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ea06720fc091f17b08df15e697cb426fd0b6b991 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:16.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:49:17 np0005532761 python3.9[135960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:17.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:17 np0005532761 python3.9[136084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930956.5687785-156-213537343888883/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=837e8dcdbcb3ca01e6b5360b86e6942411e1cc1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:18 np0005532761 python3.9[136261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:49:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:49:18 np0005532761 python3.9[136385]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930957.7141829-156-29569663556091/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a77b8952779ba8ef2b1c03bbe297caf41242313c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:18 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:49:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:18 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:49:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:19.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:19.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:19 np0005532761 python3.9[136538]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:20 np0005532761 python3.9[136690]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:49:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:21 np0005532761 python3.9[136844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:21.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:21 np0005532761 python3.9[136967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930960.5987358-330-195840201656722/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=74c5cbb51aff5d04f03bc6fd2f8d25b77e75fa97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:22 np0005532761 python3.9[137119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:49:22 np0005532761 python3.9[137243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930961.7842004-330-262749289286714/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:23 np0005532761 python3.9[137396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:24 np0005532761 python3.9[137519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930962.8963523-330-151263773993153/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a54b32234fbed42e86491e518df558f0ff5b7ff3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:49:24 np0005532761 python3.9[137672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:49:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:24 : epoch 69237348 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:49:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:25 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d60000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:25.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:25 np0005532761 python3.9[137840]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:26 np0005532761 python3.9[137992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:26 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:49:26 np0005532761 python3.9[138116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930965.6619842-494-66516054182683/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fee4ac2598b995d86fecb765146635733af26893 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:26 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d400016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:26.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:49:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:27 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d34000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:27 np0005532761 python3.9[138269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:27.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:27] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:49:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:27] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:49:27 np0005532761 python3.9[138392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930966.89439-494-150969065787992/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:28 np0005532761 python3.9[138544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:28 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d54001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:49:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204928 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:49:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:28 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:29.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:29 np0005532761 python3.9[138669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930968.084407-494-167503192604720/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3ed3bd3f502ec44013322250ae91c4192611bdae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:29 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d400016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:30 np0005532761 python3.9[138822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:30 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:49:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:30 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d54001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:31 np0005532761 python3.9[138976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:31 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:31 np0005532761 python3.9[139101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930970.6847463-681-93951877485626/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:32 np0005532761 python3.9[139253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:32 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d400023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:32 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:33 np0005532761 python3.9[139406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:49:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:49:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:33 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d54001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:49:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:49:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:33 np0005532761 python3.9[139530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930972.6168945-757-31476414379932/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:34 np0005532761 python3.9[139684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:34 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:49:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:34 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d400023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:35 np0005532761 python3.9[139837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:35 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:35 np0005532761 python3.9[139961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930974.5193543-827-203719385669140/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 23 15:49:36 np0005532761 python3.9[140113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:36 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d54001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:49:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:36 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:49:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:37.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:37 np0005532761 python3.9[140267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:37 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d400023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:37 np0005532761 python3.9[140390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930976.654862-900-12318705673238/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:38 np0005532761 python3.9[140567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:38 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d34002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:49:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:38 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d540031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:39 np0005532761 python3.9[140722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:39 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:39 np0005532761 python3.9[140846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930978.4907527-967-66986649118712/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:40 np0005532761 python3.9[140998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:40 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d40003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:49:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:40 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d40003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:40 np0005532761 python3.9[141151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:41.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:41 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d540031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:41 np0005532761 python3.9[141275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930980.5166733-1033-120689363808200/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:42 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:42 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d40003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:43 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d34002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:44 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d34002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:49:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[135208]: 23/11/2025 20:49:44 : epoch 69237348 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7d58002b30 fd 39 proxy ignored for local
Nov 23 15:49:44 np0005532761 kernel: ganesha.nfsd[137676]: segfault at 50 ip 00007f7e0987932e sp 00007f7dd4ff8210 error 4 in libntirpc.so.5.8[7f7e0985e000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 23 15:49:44 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:49:44 np0005532761 systemd[1]: Started Process Core Dump (PID 141307/UID 0).
Nov 23 15:49:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:45 np0005532761 systemd-coredump[141308]: Process 135212 (ganesha.nfsd) of user 0 dumped core.
Nov 23 15:49:45 np0005532761 systemd-coredump[141308]: Stack trace of thread 42:
Nov 23 15:49:45 np0005532761 systemd-coredump[141308]: #0  0x00007f7e0987932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Nov 23 15:49:45 np0005532761 systemd-coredump[141308]: ELF object binary architecture: AMD x86-64
Nov 23 15:49:46 np0005532761 systemd[1]: systemd-coredump@3-141307-0.service: Deactivated successfully.
Nov 23 15:49:46 np0005532761 systemd[1]: systemd-coredump@3-141307-0.service: Consumed 1.147s CPU time.
Nov 23 15:49:46 np0005532761 podman[141314]: 2025-11-23 20:49:46.08899636 +0000 UTC m=+0.023460198 container died d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:49:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7662d69bc305e2e6b3cb0d8c4b113481dea13f1cf5583f747e67608d7ef3a143-merged.mount: Deactivated successfully.
Nov 23 15:49:46 np0005532761 podman[141314]: 2025-11-23 20:49:46.129989705 +0000 UTC m=+0.064453543 container remove d1c856a933f7db9f33255bf549b266be6faedbbb2f939c6711a125b33ec1224c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 15:49:46 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:49:46 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:49:46 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.328s CPU time.
Nov 23 15:49:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:46.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:49:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:47 np0005532761 systemd[1]: session-48.scope: Deactivated successfully.
Nov 23 15:49:47 np0005532761 systemd[1]: session-48.scope: Consumed 22.222s CPU time.
Nov 23 15:49:47 np0005532761 systemd-logind[820]: Session 48 logged out. Waiting for processes to exit.
Nov 23 15:49:47 np0005532761 systemd-logind[820]: Removed session 48.
Nov 23 15:49:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 23 15:49:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:49:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:49:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:49.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:49:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/204950 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:49:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:49:52 np0005532761 systemd-logind[820]: New session 49 of user zuul.
Nov 23 15:49:52 np0005532761 systemd[1]: Started Session 49 of User zuul.
Nov 23 15:49:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:53.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:53 np0005532761 python3.9[141518]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:54 np0005532761 python3.9[141671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:49:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:55 np0005532761 python3.9[141795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930994.081563-62-211584448055824/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=756e8313f47ae598921d0392828cdc60f53012e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
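
The copy task above records checksum=756e8313f47ae598921d0392828cdc60f53012e2 for the keyring it wrote, and the preceding stat task requested checksum_algorithm=sha1, so the value is a plain SHA-1 over the file contents and can be reproduced directly:

    import hashlib

    def sha1_of(path):
        # Same digest ansible's stat/copy modules report for the file.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    # On the host this should print the checksum logged above:
    # sha1_of('/var/lib/openstack/config/ceph/ceph.client.openstack.keyring')
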
Nov 23 15:49:56 np0005532761 python3.9[141947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:56 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 4.
Nov 23 15:49:56 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:49:56 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.328s CPU time.
Nov 23 15:49:56 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
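
The "restart counter is at 4" message means systemd's Restart= logic has now relaunched this unit four times. The counter is exposed as the NRestarts unit property; a sketch reading it back (unit name copied from the journal):

    import subprocess

    UNIT = ('ceph-03808be8-ae4a-5548-82e6-4a294f1bc627'
            '@nfs.cephfs.2.0.compute-0.bfglcy.service')

    def restart_count(unit):
        # `systemctl show -p NRestarts` prints e.g. "NRestarts=4".
        out = subprocess.run(['systemctl', 'show', '-p', 'NRestarts', unit],
                             capture_output=True, text=True,
                             check=True).stdout
        return int(out.strip().split('=', 1)[1])

    print(restart_count(UNIT))
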
Nov 23 15:49:56 np0005532761 podman[142124]: 2025-11-23 20:49:56.674483179 +0000 UTC m=+0.036451505 container create 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:49:56 np0005532761 python3.9[142090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930995.6585739-62-256578770528933/.source.conf _original_basename=ceph.conf follow=False checksum=d92b20e9a86369ec384ba170ca716bfc5aeaba51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:49:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476171b081ef26795da06e70ed6e39b57c960a61697d79c2e4d6df0e734cb32d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476171b081ef26795da06e70ed6e39b57c960a61697d79c2e4d6df0e734cb32d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476171b081ef26795da06e70ed6e39b57c960a61697d79c2e4d6df0e734cb32d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476171b081ef26795da06e70ed6e39b57c960a61697d79c2e4d6df0e734cb32d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:49:56 np0005532761 podman[142124]: 2025-11-23 20:49:56.65765753 +0000 UTC m=+0.019625876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:49:56 np0005532761 podman[142124]: 2025-11-23 20:49:56.759319966 +0000 UTC m=+0.121288352 container init 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:49:56 np0005532761 podman[142124]: 2025-11-23 20:49:56.765760547 +0000 UTC m=+0.127728873 container start 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:49:56 np0005532761 bash[142124]: 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3
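
podman logs one event per lifecycle step here (image pull, container create, init, start). A sketch that replays the recent events for this ganesha container via the podman CLI; the JSON field names (Name, Status, Time) follow podman's JSON event output and should be verified on the target version:

    import json, subprocess

    def ganesha_events(since='10m'):
        # --stream=false makes `podman events` exit after dumping the
        # backlog instead of following the event stream.
        out = subprocess.run(
            ['podman', 'events', '--since', since, '--stream=false',
             '--format', 'json'],
            capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            ev = json.loads(line)
            if 'nfs-cephfs' in ev.get('Name', ''):
                print(ev.get('Time'), ev.get('Status'), ev.get('Name'))

    ganesha_events()
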
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:49:56 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:49:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:49:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
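
Grace begins at 20:49:56 with a 90-second window, so left alone it would run until 20:51:26; as later entries show, it is lifted early once the reclaim check finds no clients. The arithmetic:

    from datetime import datetime, timedelta

    start = datetime(2025, 11, 23, 20, 49, 56)     # "NFS Server Now IN GRACE"
    print((start + timedelta(seconds=90)).time())  # 20:51:26
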
Nov 23 15:49:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:49:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
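
Alertmanager is failing to POST to the dashboard's prometheus_receiver endpoint on compute-1 and compute-2. For reference, a stub that satisfies the webhook contract; the Ceph dashboard normally serves this endpoint, and the "alerts" key is part of Alertmanager's standard webhook payload:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PrometheusReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_error(404)
                return
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            # Alertmanager's webhook body lists firing alerts under "alerts".
            print('received %d alert(s)' % len(payload.get('alerts', [])))
            self.send_response(200)
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('', 8443), PrometheusReceiver).serve_forever()
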
Nov 23 15:49:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:57 np0005532761 systemd[1]: session-49.scope: Deactivated successfully.
Nov 23 15:49:57 np0005532761 systemd[1]: session-49.scope: Consumed 2.614s CPU time.
Nov 23 15:49:57 np0005532761 systemd-logind[820]: Session 49 logged out. Waiting for processes to exit.
Nov 23 15:49:57 np0005532761 systemd-logind[820]: Removed session 49.
Nov 23 15:49:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:57] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
Nov 23 15:49:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:49:57] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
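
The mgr's prometheus module is answering scrapes with roughly 48 KB of metrics. A quick manual scrape, assuming the module's default port 9283 (the port is not shown in this journal):

    from urllib.request import urlopen

    with urlopen('http://192.168.122.100:9283/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith('ceph_health_status'):
                print(line)
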
Nov 23 15:49:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:49:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:49:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:49:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 15:50:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:50:00 np0005532761 ceph-mon[74569]: overall HEALTH_OK
Nov 23 15:50:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:02 np0005532761 systemd-logind[820]: New session 50 of user zuul.
Nov 23 15:50:02 np0005532761 systemd[1]: Started Session 50 of User zuul.
Nov 23 15:50:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:50:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:50:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:50:03
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:50:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:03.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:50:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
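
Each pg_autoscaler line is computing: pg target = capacity ratio x bias x cluster-wide PG budget. The logged values are consistent, up to float rounding, with a budget of 300 PGs, i.e. mon_target_pg_per_osd (default 100) times 3 OSDs; the OSD count is an inference from the 60 GiB cluster, not stated in this journal. The raw target is then rounded to a power of two, and pg_num is left alone unless it is off by roughly a factor of three, which is why every pool above keeps its current value:

    def pg_target(usage_ratio, bias, budget_pgs):
        # pg target = capacity ratio x bias x cluster-wide PG budget
        return usage_ratio * bias * budget_pgs

    # .mgr (bias 1.0) and cephfs.cephfs.meta (bias 4.0), values as logged:
    print(pg_target(7.185749983720779e-06, 1.0, 300))  # 0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0, 300))  # 0.0006104707950771635
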
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:50:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:50:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:03.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:03 np0005532761 python3.9[142394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:50:04 np0005532761 python3.9[142553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:05 np0005532761 python3.9[142706]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:06 np0005532761 python3.9[142856]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:50:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:50:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:07.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:50:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:07 np0005532761 python3.9[143010]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
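
The ansible.posix.seboolean task above is equivalent to invoking setsebool, with persistent=True mapping to the -P flag:

    import subprocess

    # Set the SELinux boolean and persist it across reboots (-P).
    subprocess.run(['setsebool', '-P', 'virt_sandbox_use_netlink', 'on'],
                   check=True)
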
Nov 23 15:50:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:50:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:50:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:50:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:50:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
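
All of the DBUS :CRIT failures trace back to one missing file: the container has no /run/dbus/system_bus_socket available, so ganesha's DBus thread gives up and exits while the NFS server itself keeps running. A one-line probe for that precondition:

    import os

    # In the container's mount namespace this prints False, matching the
    # "Failed to connect to socket /run/dbus/system_bus_socket" message.
    print(os.path.exists('/run/dbus/system_bus_socket'))
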
Nov 23 15:50:09 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 23 15:50:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:50:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:50:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 python3.9[143299]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
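
The mgr commands being audited here can be reproduced from the librados Python binding. A sketch issuing the same "config generate-minimal-conf" command seen above; the conffile path and the implied admin credentials are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    # Same command the audit log shows mgr.compute-0.oyehye dispatching:
    cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(outbuf.decode())
    cluster.shutdown()
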
Nov 23 15:50:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:50:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
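
These TIRPC errors mean the NFS port received a connection whose PROXY-protocol header failed to parse, e.g. a bare TCP probe that sends no header at all (the lone '%' is an unexpanded placeholder in the TIRPC log format string, not lost data). A sketch of a client that prepends the header, under the assumption this listener expects PROXY protocol version 1 as sent by haproxy's send-proxy option:

    import socket

    def connect_with_proxy_v1(host, port, src, dst, sport, dport):
        # send-proxy prepends exactly one CRLF-terminated line before any
        # application bytes; omitting it trips the svc_vc_recv error above.
        s = socket.create_connection((host, port))
        s.sendall(('PROXY TCP4 %s %s %d %d\r\n'
                   % (src, dst, sport, dport)).encode('ascii'))
        return s
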
Nov 23 15:50:10 np0005532761 podman[143511]: 2025-11-23 20:50:10.989427749 +0000 UTC m=+0.037926980 container create 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 15:50:11 np0005532761 systemd[1]: Started libpod-conmon-3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7.scope.
Nov 23 15:50:11 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:10.972827834 +0000 UTC m=+0.021327085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:11.069604313 +0000 UTC m=+0.118103574 container init 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:11.07639865 +0000 UTC m=+0.124897881 container start 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:11.079524565 +0000 UTC m=+0.128023796 container attach 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:50:11 np0005532761 systemd[1]: libpod-3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7.scope: Deactivated successfully.
Nov 23 15:50:11 np0005532761 interesting_babbage[143530]: 167 167
Nov 23 15:50:11 np0005532761 conmon[143530]: conmon 3d4c24d9bdb6dc04a4d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7.scope/container/memory.events
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:11.083220476 +0000 UTC m=+0.131719727 container died 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:50:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5d9da93b9e23bae85141c8c8874f66a369374ba03b401498980eddb8c823db5d-merged.mount: Deactivated successfully.
Nov 23 15:50:11 np0005532761 podman[143511]: 2025-11-23 20:50:11.173100427 +0000 UTC m=+0.221599658 container remove 3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:50:11 np0005532761 python3.9[143519]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:50:11 np0005532761 systemd[1]: libpod-conmon-3d4c24d9bdb6dc04a4d95fa2f376eaf216303f8ad1512df1c4bd4deb6eedbff7.scope: Deactivated successfully.
Nov 23 15:50:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:11 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.324013859 +0000 UTC m=+0.047948924 container create 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:50:11 np0005532761 systemd[1]: Started libpod-conmon-02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084.scope.
Nov 23 15:50:11 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.306978873 +0000 UTC m=+0.030913928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.410259599 +0000 UTC m=+0.134194704 container init 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.422027592 +0000 UTC m=+0.145962617 container start 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.426848424 +0000 UTC m=+0.150783549 container attach 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:50:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:11 np0005532761 charming_ellis[143571]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:50:11 np0005532761 charming_ellis[143571]: --> All data devices are unavailable
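
The charming_ellis run is cephadm's periodic ceph-volume device scan: it saw one LVM device and no physical data devices it could consume. The same availability report can be pulled as JSON; the field names below follow ceph-volume's inventory output:

    import json, subprocess

    out = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        # "available" False plus rejected_reasons explains the
        # "All data devices are unavailable" summary above.
        print(dev['path'], dev['available'], dev.get('rejected_reasons'))
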
Nov 23 15:50:11 np0005532761 systemd[1]: libpod-02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084.scope: Deactivated successfully.
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.780552288 +0000 UTC m=+0.504487323 container died 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:50:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-419898538a940b4b595006db6168256caae1f903d437b7f84a4d2a27f81f7a11-merged.mount: Deactivated successfully.
Nov 23 15:50:11 np0005532761 podman[143554]: 2025-11-23 20:50:11.840951561 +0000 UTC m=+0.564886596 container remove 02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:50:11 np0005532761 systemd[1]: libpod-conmon-02d9dcfa3f84078ea95fc3012723fd5cd0442811261efcac03830a59daa13084.scope: Deactivated successfully.
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.425950907 +0000 UTC m=+0.043152322 container create 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:50:12 np0005532761 systemd[1]: Started libpod-conmon-3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10.scope.
Nov 23 15:50:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.404611183 +0000 UTC m=+0.021812588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.508161208 +0000 UTC m=+0.125362603 container init 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.513781092 +0000 UTC m=+0.130982487 container start 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.517190115 +0000 UTC m=+0.134391550 container attach 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:50:12 np0005532761 charming_cartwright[143705]: 167 167
Nov 23 15:50:12 np0005532761 systemd[1]: libpod-3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10.scope: Deactivated successfully.
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.518652895 +0000 UTC m=+0.135854300 container died 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:50:12 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0eeef12c0dc947a3dadd7c33905bf1fd6606ac545d08eb8fbd1fe75cd8576b2d-merged.mount: Deactivated successfully.
Nov 23 15:50:12 np0005532761 podman[143689]: 2025-11-23 20:50:12.557701794 +0000 UTC m=+0.174903179 container remove 3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:50:12 np0005532761 systemd[1]: libpod-conmon-3231d77ac3512194eadf1ff8b6e4a5bc0215221ec6df73d9cfad6244c12dbf10.scope: Deactivated successfully.
Nov 23 15:50:12 np0005532761 podman[143753]: 2025-11-23 20:50:12.691385614 +0000 UTC m=+0.036424828 container create da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:50:12 np0005532761 systemd[1]: Started libpod-conmon-da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de.scope.
Nov 23 15:50:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
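[Note] The recurring ceph-mgr pgmap lines are periodic cluster digests: 337 placement groups, all active+clean, 458 KiB of logical data, 149 MiB raw used of 60 GiB, plus instantaneous client IO rates. A small parser for this exact line format, as a sketch:

    # Pull PG count, state, and usage out of the recurring "pgmap vNNN: ..."
    # digest lines emitted by ceph-mgr in this log.
    import re

    line = ("pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, "
            "149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s")
    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<detail>[^;]+); "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
        line)
    print(m.groupdict())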
Nov 23 15:50:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c59c5ee7f59998791af3bc0d1461d5fd916a1675c5446fa3781d34ec863371/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c59c5ee7f59998791af3bc0d1461d5fd916a1675c5446fa3781d34ec863371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c59c5ee7f59998791af3bc0d1461d5fd916a1675c5446fa3781d34ec863371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c59c5ee7f59998791af3bc0d1461d5fd916a1675c5446fa3781d34ec863371/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
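[Note] The kernel's "supports timestamps until 2038 (0x7fffffff)" messages fire as podman bind-mounts host paths into the container: this XFS filesystem was made without the bigtime feature, so its inode timestamps are 32-bit and cap at the signed-epoch maximum. The warnings are informational; the cutoff arithmetic:

    # 0x7fffffff in the xfs remount warnings is the largest signed 32-bit
    # Unix timestamp; XFS without "bigtime" cannot represent times past it.
    from datetime import datetime, timezone

    cutoff = 0x7FFFFFFF
    print(cutoff)                                            # 2147483647
    print(datetime.fromtimestamp(cutoff, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00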
Nov 23 15:50:12 np0005532761 podman[143753]: 2025-11-23 20:50:12.77525376 +0000 UTC m=+0.120293034 container init da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:50:12 np0005532761 podman[143753]: 2025-11-23 20:50:12.677126183 +0000 UTC m=+0.022165407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:12 np0005532761 podman[143753]: 2025-11-23 20:50:12.783014473 +0000 UTC m=+0.128053707 container start da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:50:12 np0005532761 podman[143753]: 2025-11-23 20:50:12.786822467 +0000 UTC m=+0.131861711 container attach da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:50:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205012 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:50:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a18001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
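[Note] The repeating ganesha.nfsd "svc_vc_recv ... proxy header rest len failed" events line up with haproxy's checks on the same backend ("Layer4 check passed"): this NFS export appears to expect a PROXY-protocol preamble on each connection, so a bare TCP probe that connects and closes without one makes TIRPC mark that transport dead, which is what each of these lines records. (The literal "rlen = %" is a formatting quirk in the upstream ganesha message itself, preserved here verbatim.) A sketch of a probe that does send a PROXY v1 header first, assuming ganesha listens on 2049 and is configured for PROXY protocol v1 (neither detail is visible in this log):

    # Open a connection the way haproxy's traffic path would: send a PROXY
    # protocol v1 preamble before any RPC bytes, so ganesha's svc_vc_recv
    # can parse a proxy header instead of logging a failure.
    import socket

    header = b"PROXY TCP4 192.168.122.102 192.168.122.100 56000 2049\r\n"
    with socket.create_connection(("localhost", 2049), timeout=5) as s:
        s.sendall(header)
        # A real client would continue with NFS/RPC traffic here.
    print("PROXY v1 preamble sent")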
Nov 23 15:50:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:13 np0005532761 clever_cray[143801]: {
Nov 23 15:50:13 np0005532761 clever_cray[143801]:    "1": [
Nov 23 15:50:13 np0005532761 clever_cray[143801]:        {
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "devices": [
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "/dev/loop3"
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            ],
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "lv_name": "ceph_lv0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "lv_size": "21470642176",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "name": "ceph_lv0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "tags": {
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.cluster_name": "ceph",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.crush_device_class": "",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.encrypted": "0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.osd_id": "1",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.type": "block",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.vdo": "0",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:                "ceph.with_tpm": "0"
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            },
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "type": "block",
Nov 23 15:50:13 np0005532761 clever_cray[143801]:            "vg_name": "ceph_vg0"
Nov 23 15:50:13 np0005532761 clever_cray[143801]:        }
Nov 23 15:50:13 np0005532761 clever_cray[143801]:    ]
Nov 23 15:50:13 np0005532761 clever_cray[143801]: }
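[Note] The JSON that clever_cray printed matches the shape of `ceph-volume lvm list --format json`: a map keyed by OSD id, here OSD 1 backed by the ceph_vg0/ceph_lv0 logical volume on /dev/loop3, with the OSD metadata carried in LV tags. A short sketch that consumes this structure (assuming the block above is saved to lvm_list.json):

    # Walk the OSD-id -> [logical volumes] map printed by the clever_cray
    # container above and summarize each OSD's backing device and fsid.
    import json

    with open("lvm_list.json") as fh:
        listing = json.load(fh)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} type={lv['type']}")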
Nov 23 15:50:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:13 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:13 np0005532761 systemd[1]: libpod-da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de.scope: Deactivated successfully.
Nov 23 15:50:13 np0005532761 podman[143753]: 2025-11-23 20:50:13.246995915 +0000 UTC m=+0.592035179 container died da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:50:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-53c59c5ee7f59998791af3bc0d1461d5fd916a1675c5446fa3781d34ec863371-merged.mount: Deactivated successfully.
Nov 23 15:50:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:13 np0005532761 podman[143753]: 2025-11-23 20:50:13.558839483 +0000 UTC m=+0.903878697 container remove da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:50:13 np0005532761 systemd[1]: libpod-conmon-da67dfabe5bddea015be4766017742827adfc4ffb52ad45f8f440a979b9734de.scope: Deactivated successfully.
Nov 23 15:50:13 np0005532761 python3.9[143922]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.090102217 +0000 UTC m=+0.028962823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.582713134 +0000 UTC m=+0.521573720 container create e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:50:14 np0005532761 systemd[1]: Started libpod-conmon-e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26.scope.
Nov 23 15:50:14 np0005532761 python3[144185]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
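[Note] In the edpm_nftables_snippet invocation, "#012" is the octal escape for a newline used by the log transport, so the content= argument is really a small YAML document describing four firewall rules (VXLAN 4789/udp, Geneve 6081/udp, and two NOTRACK rules in the raw table) destined for /var/lib/edpm-config/firewall/ovn.yaml. Decoding it back is mechanical, as sketched below (PyYAML assumed available):

    # Undo the "#012" newline escaping on the content= field above and
    # parse the result as YAML. Requires PyYAML (pip install pyyaml).
    import yaml

    escaped = ("- rule_name: 118 neutron vxlan networks#012  rule:#012"
               "    proto: udp#012    dport: 4789#012"
               "- rule_name: 119 neutron geneve networks#012  rule:#012"
               "    proto: udp#012    dport: 6081#012    state: [\"UNTRACKED\"]#012")
    rules = yaml.safe_load(escaped.replace("#012", "\n"))
    for r in rules:
        print(r["rule_name"], "->", r["rule"])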
Nov 23 15:50:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:50:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.849958951 +0000 UTC m=+0.788819587 container init e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.857278211 +0000 UTC m=+0.796138807 container start e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:50:14 np0005532761 naughty_poincare[144188]: 167 167
Nov 23 15:50:14 np0005532761 systemd[1]: libpod-e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26.scope: Deactivated successfully.
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.910545869 +0000 UTC m=+0.849406455 container attach e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:50:14 np0005532761 podman[144070]: 2025-11-23 20:50:14.912118332 +0000 UTC m=+0.850978958 container died e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:50:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0509496daa0dbacc3e610c91b5728b810f4c583db4ebd90fbb0ce988484cd2a7-merged.mount: Deactivated successfully.
Nov 23 15:50:15 np0005532761 podman[144070]: 2025-11-23 20:50:15.184164661 +0000 UTC m=+1.123025257 container remove e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poincare, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:50:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:15 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:15 np0005532761 systemd[1]: libpod-conmon-e7c46ec9e55a83e9c0fc45eb91e0c7439103e6dbe82c912e7f2dee4c5c9e5d26.scope: Deactivated successfully.
Nov 23 15:50:15 np0005532761 podman[144337]: 2025-11-23 20:50:15.36421078 +0000 UTC m=+0.061917526 container create e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:50:15 np0005532761 systemd[1]: Started libpod-conmon-e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b.scope.
Nov 23 15:50:15 np0005532761 podman[144337]: 2025-11-23 20:50:15.334032554 +0000 UTC m=+0.031739490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:50:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:50:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
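[Note] The recurring mon _set_new_cache_sizes lines show the monitor's cache auto-tuner splitting its budget: the inc and full osdmap caches get ~332 MiB each and the kv (RocksDB) cache ~304 MiB, which together account for nearly all of the ~973 MiB cache_size. The arithmetic, as a quick check:

    # Sanity-check the monitor cache split reported by _set_new_cache_sizes:
    # the three allocations should land close to the advertised cache_size.
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size, f"{total / cache_size:.1%}")   # ~99.5% of the budget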
Nov 23 15:50:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fda02a1757acbdb07980a587969853f5c52a789b7c74d3189807c0c2789e07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fda02a1757acbdb07980a587969853f5c52a789b7c74d3189807c0c2789e07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fda02a1757acbdb07980a587969853f5c52a789b7c74d3189807c0c2789e07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fda02a1757acbdb07980a587969853f5c52a789b7c74d3189807c0c2789e07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:50:15 np0005532761 podman[144337]: 2025-11-23 20:50:15.490784095 +0000 UTC m=+0.188490891 container init e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:50:15 np0005532761 podman[144337]: 2025-11-23 20:50:15.499477913 +0000 UTC m=+0.197184709 container start e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:50:15 np0005532761 podman[144337]: 2025-11-23 20:50:15.50337093 +0000 UTC m=+0.201077716 container attach e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:50:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:15 np0005532761 python3.9[144379]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:16 np0005532761 lvm[144533]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:50:16 np0005532761 lvm[144533]: VG ceph_vg0 finished
Nov 23 15:50:16 np0005532761 thirsty_ellis[144382]: {}
Nov 23 15:50:16 np0005532761 systemd[1]: libpod-e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b.scope: Deactivated successfully.
Nov 23 15:50:16 np0005532761 systemd[1]: libpod-e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b.scope: Consumed 1.135s CPU time.
Nov 23 15:50:16 np0005532761 podman[144337]: 2025-11-23 20:50:16.228672357 +0000 UTC m=+0.926379113 container died e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:50:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e9fda02a1757acbdb07980a587969853f5c52a789b7c74d3189807c0c2789e07-merged.mount: Deactivated successfully.
Nov 23 15:50:16 np0005532761 podman[144337]: 2025-11-23 20:50:16.295579859 +0000 UTC m=+0.993286635 container remove e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ellis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:50:16 np0005532761 systemd[1]: libpod-conmon-e61633abb6eee7eee724337cb4b6db4e9497acf64a53dc15a5cba59a94a8b28b.scope: Deactivated successfully.
Nov 23 15:50:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:50:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:50:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:16 np0005532761 python3.9[144647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:50:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:17.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:50:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:17.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:50:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
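[Note] The alertmanager warnings show the ceph-dashboard webhook receiver failing to deliver alerts to the dashboards on compute-1 and compute-2: the TCP connects to port 8443 time out, and once the retry budget is exhausted the dispatcher drops the notification. A minimal reachability check for the same endpoints:

    # Probe the two webhook endpoints alertmanager is timing out against.
    # This only tests TCP reachability of port 8443, the layer at which the
    # "dial tcp ... i/o timeout" errors above occur.
    import socket

    for host in ("192.168.122.101", "192.168.122.102"):
        try:
            with socket.create_connection((host, 8443), timeout=3):
                print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)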
Nov 23 15:50:17 np0005532761 python3.9[144727]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:17 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
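[Note] These paired lines record Prometheus scraping the ceph-mgr prometheus module (48330 bytes of metrics, HTTP 200); the cherrypy access logger inside ceph-mgr and the container's stdout both report the same request. Fetching the same endpoint by hand, assuming the module's default port 9283 (the port is not shown in the log):

    # Scrape the ceph-mgr prometheus module the same way Prometheus does.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as r:
        body = r.read()
    print(r.status, len(body), "bytes")
    print(body.decode().splitlines()[0])   # first metric line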
Nov 23 15:50:17 np0005532761 python3.9[144880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:50:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:50:18 np0005532761 python3.9[144983]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3i1od5x0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:50:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:19 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:19 np0005532761 python3.9[145137]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:19 np0005532761 python3.9[145215]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:50:20 np0005532761 python3.9[145368]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
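[Note] Here ansible runs `nft -j list ruleset` to snapshot the current nftables state as JSON before applying the edpm rule files. The same snapshot can be inspected directly; a sketch that counts objects per type (needs root, as nft does):

    # Take the same JSON ruleset snapshot the ansible task above takes,
    # then count the object types (tables, chains, rules, ...) it contains.
    import collections
    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    objects = json.loads(out)["nftables"]
    counts = collections.Counter(next(iter(o)) for o in objects)
    print(dict(counts))   # e.g. {'metainfo': 1, 'table': 2, 'chain': 8, ...}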
Nov 23 15:50:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:21 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:21 np0005532761 python3[145522]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:50:22 np0005532761 python3.9[145674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:50:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:23 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:23 np0005532761 python3.9[145801]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931021.9821525-431-165727268133241/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:24 np0005532761 python3.9[145955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:24 np0005532761 python3.9[146081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931023.5750802-476-148531252356175/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
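[Note] edpm-jumps.nft and edpm-update-jumps.nft are written with identical content (both copies report checksum 81c2fc96...), rendered from the same jump-chain.j2 template; ansible's checksum field is a SHA-1 of the file body, which is easy to verify:

    # Verify ansible's reported checksum: it is the SHA-1 of the copied
    # file, so two files rendered from the same template report the same
    # value.
    import hashlib

    def ansible_checksum(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha1(fh.read()).hexdigest()

    for path in ("/etc/nftables/edpm-jumps.nft",
                 "/etc/nftables/edpm-update-jumps.nft"):
        print(path, ansible_checksum(path))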
Nov 23 15:50:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:50:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:25 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:25 np0005532761 python3.9[146234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:26 np0005532761 python3.9[146359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931025.210123-521-91955119365030/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:27.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:50:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
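
The two dispatcher messages above show Alertmanager's ceph-dashboard webhook receivers timing out against compute-1 and compute-2 on port 8443. A quick reachability probe from this node, as a diagnostic sketch (the curl flags and the loop are mine; the URLs come straight from the error text):

    # Probe the same receiver endpoints Alertmanager keeps retrying;
    # -m 5 bounds each attempt roughly like the notify timeout.
    for h in compute-1 compute-2; do
      curl -s -m 5 -o /dev/null \
        "http://${h}.ctlplane.example.com:8443/api/prometheus_receiver" \
        || echo "${h}: unreachable"
    done
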
Nov 23 15:50:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:50:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:50:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:27 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:27 np0005532761 python3.9[146513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:50:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:50:27 np0005532761 python3.9[146638]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931026.7877352-566-171111371775649/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:28 np0005532761 python3.9[146791]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:29.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:29 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:29 np0005532761 python3.9[146919]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931028.3199508-611-229773900652841/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
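
The copy tasks above stage the EDPM firewall as five root-owned, mode 0600 snippets under /etc/nftables: edpm-chains.nft, edpm-flushes.nft, edpm-rules.nft, edpm-update-jumps.nft and edpm-jumps.nft (the last two carry identical checksums because both are rendered from jump-chain.j2). A minimal shell equivalent of one such copy, with rendered-rules.nft standing in for the template output Ansible stages in its tmp directory:

    # Same ownership and mode the ansible.legacy.copy tasks request.
    install -o root -g root -m 0600 rendered-rules.nft /etc/nftables/edpm-rules.nft
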
Nov 23 15:50:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:30 np0005532761 python3.9[147072]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:50:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:31 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:31 np0005532761 python3.9[147225]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
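
The pipeline above concatenates the five snippets in load order (chains, flushes, rules, update-jumps, jumps) and hands them to nft in check-only mode, so a syntax error fails the play before anything touches the kernel. The same dry run by hand:

    # -c parses and validates the combined ruleset without committing it.
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
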
Nov 23 15:50:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:32 np0005532761 python3.9[147380]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
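
Decoding the #012 (escaped newline) sequences in the blockinfile arguments above, the managed block written into /etc/sysconfig/nftables.conf should come out as below, with each candidate file checked through the validate command (nft -c -f %s) before being moved into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
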
Nov 23 15:50:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:33.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:50:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
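
The handle_command/audit pair above is the mgr's periodic blocklist poll. The same query is available from the CLI (current Ceph releases spell it blocklist):

    # JSON list of blocklisted client addresses, matching the dispatched mon_command.
    ceph osd blocklist ls --format json
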
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:50:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:50:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:33 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:33 np0005532761 python3.9[147534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:33.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:34 np0005532761 python3.9[147687]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:50:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:50:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:35 np0005532761 python3.9[147843]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
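
With the chains already loaded at 15:50:33 via nft -f /etc/nftables/edpm-chains.nft, the task above performs the live reload: flush the EDPM chains, reinstall the rules, then refresh the jump chains. Because nft -f reads its whole input as one transaction, the ruleset never passes through a half-applied state:

    # Flushes + rules + updated jumps, applied atomically from stdin.
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
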
Nov 23 15:50:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:35.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:35 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:35.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:35 np0005532761 python3.9[147998]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
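
The touch at 15:50:30, the stat at 15:50:34 and the removal above complete a changed-marker handshake: a marker file is dropped when the rendered rules change, the reload only runs when the stat finds the marker, and the marker is cleared once the reload succeeds. The same idiom in plain shell (apply_rules is a hypothetical stand-in for the nft reload logged at 15:50:35):

    marker=/etc/nftables/edpm-rules.nft.changed
    if [ -e "$marker" ]; then
        apply_rules && rm -f "$marker"   # clear the marker only on success
    fi
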
Nov 23 15:50:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:50:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.4 total, 600.0 interval#012Cumulative writes: 7175 writes, 29K keys, 7175 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7175 writes, 1282 syncs, 5.60 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7175 writes, 29K keys, 7175 commit groups, 1.0 writes per commit group, ingest: 20.52 MB, 0.03 MB/s#012Interval WAL: 7175 writes, 1282 syncs, 5.60 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
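
The OSD's ten-minute RocksDB stats dump above (Uptime interval: 600.0 s) arrives as a single journal record with its newlines escaped as #012, the octal escape used when multi-line messages are forwarded to syslog. It reads far better re-expanded, for example (assuming GNU sed, which interprets \n in the replacement):

    # Print ceph-osd journal messages raw and turn the octal escapes back into newlines.
    journalctl -t ceph-osd -o cat | sed 's/#012/\n/g'
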
Nov 23 15:50:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:50:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:37 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:37 np0005532761 python3.9[148150]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:37] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:37] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:38 np0005532761 python3.9[148329]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:38 np0005532761 ovs-vsctl[148331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
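
The ovs-vsctl call above stamps the OVN chassis configuration (geneve encap IP 172.19.0.101, the datacentre:br-ex bridge mapping, the SSL southbound remote) into the external_ids column of the Open_vSwitch table, where ovn-controller picks it up. Individual keys can be read back for verification:

    # Spot-check a few of the chassis settings just written.
    ovs-vsctl get open . external_ids:ovn-remote
    ovs-vsctl get open . external_ids:ovn-encap-ip
    ovs-vsctl get open . external_ids:ovn-bridge-mappings
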
Nov 23 15:50:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:50:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:50:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:39 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:39.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:39 np0005532761 python3.9[148483]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:50:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:40 np0005532761 python3.9[148639]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:40 np0005532761 ovs-vsctl[148641]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
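
Taken together with the grep at 15:50:39 (ovs-vsctl show | grep -q "Manager"), the create above adds a local ptcp:6640 manager socket only when none exists; the full command appears unmasked in the ovs-vsctl INFO line, the Ansible invocation log having apparently masked part of the target as a suspected password-in-URL. As one guarded step:

    # Create the 127.0.0.1:6640 manager listener only if no Manager row is configured.
    if ! ovs-vsctl show | grep -q "Manager"; then
        ovs-vsctl --timeout=5 --id=@manager \
            -- create Manager 'target="ptcp:6640:127.0.0.1"' \
            -- add Open_vSwitch . manager_options @manager
    fi
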
Nov 23 15:50:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:41 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:41.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:42 np0005532761 python3.9[148791]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:50:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:42 np0005532761 python3.9[148946]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
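
The file task above labels /var/local/libexec recursively as container_file_t so podman-managed containers can read the helper scripts installed there. A persistent non-Ansible equivalent, assuming policycoreutils-python-utils provides semanage:

    # Record the label in local SELinux policy, then apply it to the tree.
    semanage fcontext -a -t container_file_t '/var/local/libexec(/.*)?'
    restorecon -Rv /var/local/libexec
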
Nov 23 15:50:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:43 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:43.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:43 np0005532761 python3.9[149101]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:44 np0005532761 python3.9[149179]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:50:44 np0005532761 python3.9[149332]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:45 np0005532761 python3.9[149411]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:45 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:45.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:46 np0005532761 python3.9[149565]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:47.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:50:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:47 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c0020a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:47 np0005532761 python3.9[149719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:47.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:47 np0005532761 python3.9[149799]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:47] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:47] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:50:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:50:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:50:48 np0005532761 python3.9[149952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:49 np0005532761 python3.9[150031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:49 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:49.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:50 np0005532761 python3.9[150183]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
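
The systemd module call above (daemon_reload=True, enabled=True, state=started) rolls the usual three actions into one task; the 91-edpm-container-shutdown.preset file handled earlier keeps the unit enabled across systemctl preset runs. Roughly equivalent by hand:

    systemctl daemon-reload                           # pick up the freshly installed unit file
    systemctl enable --now edpm-container-shutdown    # enable and start in one step
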
Nov 23 15:50:50 np0005532761 systemd[1]: Reloading.
Nov 23 15:50:50 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:50:50 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:50:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:50:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c0020a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:51 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:51 np0005532761 python3.9[150375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:51.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:52 np0005532761 python3.9[150453]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c002240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:52 np0005532761 python3.9[150606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:50:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:50:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:53 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:53 np0005532761 python3.9[150685]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:54 np0005532761 python3.9[150839]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:50:54 np0005532761 systemd[1]: Reloading.
Nov 23 15:50:54 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:50:54 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:50:54 np0005532761 systemd[1]: Starting Create netns directory...
Nov 23 15:50:54 np0005532761 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:50:54 np0005532761 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:50:54 np0005532761 systemd[1]: Finished Create netns directory.
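
netns-placeholder starts, deactivates and finishes within the same second, which fits a oneshot unit whose job, per its "Create netns directory" description and the run-netns-placeholder.mount unit that briefly appears, is likely to ensure /run/netns exists and is mounted before containers need network namespaces. Checking after the fact:

    # A finished oneshot shows inactive/dead with Result=success.
    systemctl show -p ActiveState -p SubState -p Result netns-placeholder.service
    ls -ld /run/netns    # the directory the unit is named after
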
Nov 23 15:50:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:50:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:54 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:54 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:55 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:55.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:55 np0005532761 python3.9[151036]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:56 np0005532761 python3.9[151189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:57.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:50:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:50:57.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:50:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:57 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:57 np0005532761 python3.9[151313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931056.2580152-1364-67714697740971/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:50:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:50:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:50:58 np0005532761 python3.9[151466]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:50:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:58 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:58 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:50:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:50:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:50:59 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:59 np0005532761 python3.9[151645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:50:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:00 np0005532761 python3.9[151768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931058.8421493-1439-56839898652356/.source.json _original_basename=.2_ikadvt follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:51:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:00 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:00 np0005532761 python3.9[151921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:00 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:01 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:51:03
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'vms', '.mgr', 'default.rgw.meta']
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:51:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:51:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:51:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:51:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:51:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:03 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:03.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:03 np0005532761 python3.9[152351]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 23 15:51:04 np0005532761 python3.9[152506]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:51:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:04 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:04 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:05 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:05.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:05 np0005532761 python3.9[152659]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:51:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:06 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:06 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:07.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:51:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:07 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:51:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:51:07 np0005532761 python3[152840]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:51:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:08 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:08 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:09.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:51:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:11 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:11.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:13 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:13.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:15 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:15.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:17.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:51:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:51:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:17 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:17.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:51:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:51:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:51:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:51:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:19 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:19.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:20 np0005532761 podman[152853]: 2025-11-23 20:51:20.051222089 +0000 UTC m=+12.233318997 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:20 np0005532761 podman[153094]: 2025-11-23 20:51:20.164937314 +0000 UTC m=+0.022010800 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:20 np0005532761 podman[153094]: 2025-11-23 20:51:20.269456341 +0000 UTC m=+0.126529777 container create 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 15:51:20 np0005532761 python3[152840]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:51:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:51:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:51:21 np0005532761 python3.9[153288]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:51:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:21 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:21.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:51:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:51:22 np0005532761 python3.9[153443]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:51:22 np0005532761 python3.9[153570]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:51:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:22 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:51:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:22 np0005532761 podman[153623]: 2025-11-23 20:51:22.88939505 +0000 UTC m=+0.070285942 container create c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:51:22 np0005532761 podman[153623]: 2025-11-23 20:51:22.840123295 +0000 UTC m=+0.021014237 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:22 np0005532761 systemd[1]: Started libpod-conmon-c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d.scope.
Nov 23 15:51:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:23 np0005532761 podman[153623]: 2025-11-23 20:51:23.034178569 +0000 UTC m=+0.215069471 container init c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:23 np0005532761 podman[153623]: 2025-11-23 20:51:23.041897195 +0000 UTC m=+0.222788087 container start c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:51:23 np0005532761 reverent_murdock[153679]: 167 167
Nov 23 15:51:23 np0005532761 systemd[1]: libpod-c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d.scope: Deactivated successfully.
Nov 23 15:51:23 np0005532761 podman[153623]: 2025-11-23 20:51:23.068399315 +0000 UTC m=+0.249290257 container attach c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:51:23 np0005532761 podman[153623]: 2025-11-23 20:51:23.069870208 +0000 UTC m=+0.250761090 container died c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:51:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e1880090293b0cba2505c082927fad959a3674261b062cb7793c568cb6f3cd87-merged.mount: Deactivated successfully.
Nov 23 15:51:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:23.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
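The recurring anonymous "HEAD / HTTP/1.0" requests logged by radosgw here (and roughly every two seconds below) are load-balancer health probes against this node's beast frontend, each answered 200 with near-zero latency. A minimal sketch reproducing such a probe, assuming a host and port, since the log never records which port beast listens on:

    import http.client

    def probe_rgw(host: str, port: int, timeout: float = 5.0) -> int:
        """Send the same anonymous HEAD / the balancer sends and
        return the HTTP status (200 in the entries above)."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status
        finally:
            conn.close()

    # Hypothetical endpoint; substitute the rgw_frontends port in use:
    # print(probe_rgw("np0005532761", 8080))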
Nov 23 15:51:23 np0005532761 podman[153623]: 2025-11-23 20:51:23.247128252 +0000 UTC m=+0.428019144 container remove c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:51:23 np0005532761 systemd[1]: libpod-conmon-c004f7928ea9583b1fb9cd1d4f04dcb8ac80f384ca4ff45c80928b359b315d8d.scope: Deactivated successfully.
Nov 23 15:51:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:23 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.420722084 +0000 UTC m=+0.073445554 container create 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.369917754 +0000 UTC m=+0.022641244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:23 np0005532761 systemd[1]: Started libpod-conmon-23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f.scope.
Nov 23 15:51:23 np0005532761 python3.9[153795]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931082.8399243-1703-41467871299175/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
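The ansible-copy event above installs the rendered unit file at /etc/systemd/system/edpm_ovn_controller.service as root:root, mode 0644. A minimal Python sketch of the same copy-into-place, assuming ansible's usual temp-file-then-rename behavior so the unit is never observed half-written (paths and permissions are taken from the log line; the helper itself is illustrative):

    import os
    import shutil
    import tempfile

    def atomic_install(src: str, dest: str, mode: int = 0o644,
                       owner: str = "root", group: str = "root") -> None:
        """Copy src over dest via a temp file in the destination
        directory, then rename into place atomically."""
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        try:
            with os.fdopen(fd, "wb") as out, open(src, "rb") as inp:
                shutil.copyfileobj(inp, out)
            os.chmod(tmp, mode)
            shutil.chown(tmp, user=owner, group=group)
            os.replace(tmp, dest)  # atomic within one filesystem
        except BaseException:
            os.unlink(tmp)
            raise

    # atomic_install(
    #     "/home/zuul/.ansible/tmp/ansible-tmp-1763931082.8399243-1703-41467871299175/source",
    #     "/etc/systemd/system/edpm_ovn_controller.service")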
Nov 23 15:51:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.575272365 +0000 UTC m=+0.227995845 container init 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.584012803 +0000 UTC m=+0.236736273 container start 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:51:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:23.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.641401623 +0000 UTC m=+0.294125123 container attach 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:51:23 np0005532761 happy_poincare[153820]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:51:23 np0005532761 happy_poincare[153820]: --> All data devices are unavailable
Nov 23 15:51:23 np0005532761 systemd[1]: libpod-23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f.scope: Deactivated successfully.
Nov 23 15:51:23 np0005532761 podman[153804]: 2025-11-23 20:51:23.927710807 +0000 UTC m=+0.580434317 container died 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:51:24 np0005532761 python3.9[153900]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:51:24 np0005532761 systemd[1]: Reloading.
Nov 23 15:51:24 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:24 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:25 np0005532761 python3.9[154036]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
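This ansible-systemd call (state=restarted, enabled=True, daemon_reload=False) is what triggers the ovn_controller unit activity that follows. Its effect, reduced to the two systemctl operations it implies (a sketch, not the module's actual implementation):

    import subprocess

    def enable_and_restart(unit: str) -> None:
        """Enable the unit for boot, then restart it now; each call
        raises CalledProcessError on failure."""
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    # enable_and_restart("edpm_ovn_controller.service")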
Nov 23 15:51:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:25.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:25 np0005532761 systemd[1]: Reloading.
Nov 23 15:51:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:25 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:25 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:25 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:25.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:25 np0005532761 systemd[1]: Starting ovn_controller container...
Nov 23 15:51:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a3b8cae71760941a61f1fef555bae50e1f6ba64d7328273ea3b5534cec69ce78-merged.mount: Deactivated successfully.
Nov 23 15:51:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:27.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
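The alertmanager dispatcher error above shows the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out ("context deadline exceeded" after two attempts). A quick connectivity check against the same URL with a client-side timeout; this only verifies reachability, not how the receiver handles a real alert payload:

    import json
    import urllib.request

    def check_receiver(url: str, timeout: float = 5.0) -> str:
        """POST a trivial JSON body to the webhook endpoint; a hang or
        refusal here mirrors the dispatcher failure logged above."""
        req = urllib.request.Request(
            url, data=json.dumps({}).encode(),
            headers={"Content-Type": "application/json"}, method="POST")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return f"HTTP {resp.status}"
        except OSError as exc:  # URLError and socket timeouts both land here
            return f"failed: {exc}"

    # check_receiver("http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver")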
Nov 23 15:51:27 np0005532761 podman[153804]: 2025-11-23 20:51:27.167771792 +0000 UTC m=+3.820495262 container remove 23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_poincare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:51:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:27.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:27 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:27 np0005532761 systemd[1]: libpod-conmon-23597c19d4d8ef303fcc6f4244735091a61d849c6c2f6f62e8eda4f981fb754f.scope: Deactivated successfully.
Nov 23 15:51:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd5f4c5b6c2c16bec44814989f4f666342266815a23454bfe19df912651450d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:27.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:27 np0005532761 systemd[1]: Started /usr/bin/podman healthcheck run 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b.
Nov 23 15:51:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:51:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:51:27 np0005532761 podman[154079]: 2025-11-23 20:51:27.872521204 +0000 UTC m=+2.025972187 container init 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 23 15:51:27 np0005532761 ovn_controller[154148]: + sudo -E kolla_set_configs
Nov 23 15:51:27 np0005532761 podman[154079]: 2025-11-23 20:51:27.905552952 +0000 UTC m=+2.059003825 container start 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 15:51:27 np0005532761 edpm-start-podman-container[154079]: ovn_controller
Nov 23 15:51:27 np0005532761 systemd[1]: Created slice User Slice of UID 0.
Nov 23 15:51:27 np0005532761 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 15:51:27 np0005532761 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 15:51:27 np0005532761 systemd[1]: Starting User Manager for UID 0...
Nov 23 15:51:27 np0005532761 podman[154180]: 2025-11-23 20:51:27.979267282 +0000 UTC m=+0.066029347 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 15:51:27 np0005532761 systemd[1]: 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b-1ea8efa9c8141b16.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 15:51:27 np0005532761 systemd[1]: 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b-1ea8efa9c8141b16.service: Failed with result 'exit-code'.
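The transient unit that just failed wraps a single `podman healthcheck run` against the ovn_controller container; the health_status entry above shows the check fired while the container was still in state "starting" (health_failing_streak=1), so the exit status 1 reflects "not healthy yet" rather than a crash. The same one-shot check can be run by hand; return code 0 means healthy:

    import subprocess

    def container_healthy(name: str) -> bool:
        """Run the container's configured healthcheck once, exactly as
        the transient systemd unit above does; rc 0 means healthy."""
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return result.returncode == 0

    # container_healthy("ovn_controller")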
Nov 23 15:51:27 np0005532761 edpm-start-podman-container[154077]: Creating additional drop-in dependency for "ovn_controller" (93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b)
Nov 23 15:51:28 np0005532761 systemd[1]: Reloading.
Nov 23 15:51:28 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:28 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:28 np0005532761 systemd[154212]: Queued start job for default target Main User Target.
Nov 23 15:51:28 np0005532761 systemd[154212]: Created slice User Application Slice.
Nov 23 15:51:28 np0005532761 systemd[154212]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 15:51:28 np0005532761 systemd[154212]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:51:28 np0005532761 systemd[154212]: Reached target Paths.
Nov 23 15:51:28 np0005532761 systemd[154212]: Reached target Timers.
Nov 23 15:51:28 np0005532761 systemd[154212]: Starting D-Bus User Message Bus Socket...
Nov 23 15:51:28 np0005532761 systemd[154212]: Starting Create User's Volatile Files and Directories...
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.147744817 +0000 UTC m=+0.080628726 container create 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:51:28 np0005532761 systemd[154212]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:51:28 np0005532761 systemd[154212]: Reached target Sockets.
Nov 23 15:51:28 np0005532761 systemd[154212]: Finished Create User's Volatile Files and Directories.
Nov 23 15:51:28 np0005532761 systemd[154212]: Reached target Basic System.
Nov 23 15:51:28 np0005532761 systemd[154212]: Reached target Main User Target.
Nov 23 15:51:28 np0005532761 systemd[154212]: Startup finished in 172ms.
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.089626611 +0000 UTC m=+0.022510540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:28 np0005532761 systemd[1]: Started User Manager for UID 0.
Nov 23 15:51:28 np0005532761 systemd[1]: Started ovn_controller container.
Nov 23 15:51:28 np0005532761 systemd[1]: Started libpod-conmon-09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a.scope.
Nov 23 15:51:28 np0005532761 systemd[1]: Started Session c1 of User root.
Nov 23 15:51:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: INFO:__main__:Validating config file
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: INFO:__main__:Writing out command to execute
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.444143491 +0000 UTC m=+0.377027420 container init 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:51:28 np0005532761 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: ++ cat /run_command
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + ARGS=
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + sudo kolla_copy_cacerts
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.452770486 +0000 UTC m=+0.385654395 container start 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:51:28 np0005532761 stoic_bouman[154303]: 167 167
Nov 23 15:51:28 np0005532761 systemd[1]: libpod-09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a.scope: Deactivated successfully.
Nov 23 15:51:28 np0005532761 systemd[1]: Started Session c2 of User root.
Nov 23 15:51:28 np0005532761 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + [[ ! -n '' ]]
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + . kolla_extend_start
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + umask 0022
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
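The xtrace above is the tail of kolla's container bootstrap: kolla_set_configs copies files in per the COPY_ALWAYS strategy, the generated command line is read back from /run_command, and exec replaces the shell so ovn-controller becomes the container's main process. The final two steps, re-sketched in Python under that reading of the trace:

    import os
    import shlex

    def kolla_exec(run_command_path: str = "/run_command") -> None:
        """Read the command written out during config setup and exec it,
        replacing the current process like the `exec` in the trace."""
        with open(run_command_path) as fh:
            argv = shlex.split(fh.read())
        os.execvp(argv[0], argv)  # does not return on success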
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5200] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5207] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5216] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5221] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5224] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 15:51:28 np0005532761 kernel: br-int: entered promiscuous mode
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:28 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5467] manager: (ovn-d8ff4a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5474] manager: (ovn-6de892-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5479] manager: (ovn-10e3bf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 23 15:51:28 np0005532761 systemd-udevd[154357]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:51:28 np0005532761 kernel: genev_sys_6081: entered promiscuous mode
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5736] device (genev_sys_6081): carrier: link connected
Nov 23 15:51:28 np0005532761 NetworkManager[49067]: <info>  [1763931088.5739] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 23 15:51:28 np0005532761 systemd-udevd[154365]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.764989488 +0000 UTC m=+0.697873397 container attach 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:51:28 np0005532761 podman[154250]: 2025-11-23 20:51:28.767984485 +0000 UTC m=+0.700868394 container died 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:51:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:29 np0005532761 python3.9[154490]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:29 np0005532761 ovs-vsctl[154491]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 23 15:51:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ac934179809ebc1e7dbf41d181192f619d6e73df6209042aebb23f6e40f4024c-merged.mount: Deactivated successfully.
Nov 23 15:51:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:29 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:29 np0005532761 podman[154250]: 2025-11-23 20:51:29.371073365 +0000 UTC m=+1.303957274 container remove 09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 15:51:29 np0005532761 systemd[1]: libpod-conmon-09119fc0b61a07a4114ddf85ec8caecae24cfd2d80b3429027fba323c0b1090a.scope: Deactivated successfully.
Nov 23 15:51:29 np0005532761 podman[154524]: 2025-11-23 20:51:29.534361314 +0000 UTC m=+0.030400430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:29.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:30 np0005532761 podman[154524]: 2025-11-23 20:51:30.269713319 +0000 UTC m=+0.765752385 container create f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:51:30 np0005532761 python3.9[154666]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:30 np0005532761 ovs-vsctl[154669]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
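This ERR is benign: the playbook reads external_ids:ovn-cms-options before removing it (see the remove call below), and the key was never set on this host. ovs-vsctl's get fails on a missing key unless --if-exists is passed; a quieter version of the same read (the flag is standard ovs-vsctl, the wrapper is illustrative):

    import subprocess

    def get_cms_options() -> str:
        """Return ovn-cms-options, or '' when the key is absent,
        instead of the db_ctl_base error logged above."""
        proc = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True)
        return proc.stdout.strip().strip('"')

    # print(get_cms_options())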
Nov 23 15:51:30 np0005532761 systemd[1]: Started libpod-conmon-f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825.scope.
Nov 23 15:51:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f110d3d324ba82f93faeb32ddf645170a9d66b6054ba53a2010dc4b4210365/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f110d3d324ba82f93faeb32ddf645170a9d66b6054ba53a2010dc4b4210365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f110d3d324ba82f93faeb32ddf645170a9d66b6054ba53a2010dc4b4210365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f110d3d324ba82f93faeb32ddf645170a9d66b6054ba53a2010dc4b4210365/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:51:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:31 np0005532761 podman[154524]: 2025-11-23 20:51:31.11215923 +0000 UTC m=+1.608198316 container init f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:51:31 np0005532761 podman[154524]: 2025-11-23 20:51:31.122106055 +0000 UTC m=+1.618145131 container start f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:51:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:31.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:31 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:31 np0005532761 python3.9[154831]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:31 np0005532761 ovs-vsctl[154836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 23 15:51:31 np0005532761 eager_tesla[154698]: {
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:    "1": [
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:        {
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "devices": [
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "/dev/loop3"
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            ],
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "lv_name": "ceph_lv0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "lv_size": "21470642176",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "name": "ceph_lv0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "tags": {
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.cluster_name": "ceph",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.crush_device_class": "",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.encrypted": "0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.osd_id": "1",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.type": "block",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.vdo": "0",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:                "ceph.with_tpm": "0"
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            },
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "type": "block",
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:            "vg_name": "ceph_vg0"
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:        }
Nov 23 15:51:31 np0005532761 eager_tesla[154698]:    ]
Nov 23 15:51:31 np0005532761 eager_tesla[154698]: }
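The block emitted by eager_tesla above appears to be the JSON report of ceph-volume lvm list, keyed by OSD id (here a single OSD, osd.1, backed by an LV on /dev/loop3). Pulling the useful fields out of such a report with the standard library; the field names are exactly those shown above:

    import json

    def summarize_lvm_report(raw: str) -> None:
        """Print one line per OSD from a ceph-volume lvm list JSON
        report shaped like the block captured above."""
        for osd_id, entries in json.loads(raw).items():
            for entry in entries:
                tags = entry["tags"]
                print(f"osd.{osd_id}: {entry['lv_path']} "
                      f"on {','.join(entry['devices'])} "
                      f"osd_fsid={tags['ceph.osd_fsid']}")

    # For the report above, this prints:
    # osd.1: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c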
Nov 23 15:51:31 np0005532761 systemd[1]: libpod-f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825.scope: Deactivated successfully.
Nov 23 15:51:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:31.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:31 np0005532761 podman[154524]: 2025-11-23 20:51:31.678520417 +0000 UTC m=+2.174559503 container attach f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:51:31 np0005532761 podman[154524]: 2025-11-23 20:51:31.681216538 +0000 UTC m=+2.177255644 container died f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:31 np0005532761 systemd[1]: session-50.scope: Deactivated successfully.
Nov 23 15:51:31 np0005532761 systemd[1]: session-50.scope: Consumed 55.020s CPU time.
Nov 23 15:51:31 np0005532761 systemd-logind[820]: Session 50 logged out. Waiting for processes to exit.
Nov 23 15:51:31 np0005532761 systemd-logind[820]: Removed session 50.
Nov 23 15:51:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:51:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:51:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:33.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:51:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:51:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:33 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-16f110d3d324ba82f93faeb32ddf645170a9d66b6054ba53a2010dc4b4210365-merged.mount: Deactivated successfully.
Nov 23 15:51:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:34 np0005532761 podman[154524]: 2025-11-23 20:51:34.098478146 +0000 UTC m=+4.594517212 container remove f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 15:51:34 np0005532761 systemd[1]: libpod-conmon-f4d1ff70d51068374cf4dfa7ffd76e3061724d993c95a3247603a75c3fe7e825.scope: Deactivated successfully.
Nov 23 15:51:34 np0005532761 podman[154972]: 2025-11-23 20:51:34.674467863 +0000 UTC m=+0.029409907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:35 np0005532761 podman[154972]: 2025-11-23 20:51:35.16179722 +0000 UTC m=+0.516739214 container create aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:51:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:35 np0005532761 systemd[1]: Started libpod-conmon-aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6.scope.
Nov 23 15:51:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:35 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:35 np0005532761 podman[154972]: 2025-11-23 20:51:35.335262799 +0000 UTC m=+0.690204863 container init aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:51:35 np0005532761 podman[154972]: 2025-11-23 20:51:35.344314294 +0000 UTC m=+0.699256288 container start aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:51:35 np0005532761 strange_wescoff[154989]: 167 167
Nov 23 15:51:35 np0005532761 systemd[1]: libpod-aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6.scope: Deactivated successfully.
Nov 23 15:51:35 np0005532761 podman[154972]: 2025-11-23 20:51:35.625768078 +0000 UTC m=+0.980710172 container attach aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:35 np0005532761 podman[154972]: 2025-11-23 20:51:35.627081248 +0000 UTC m=+0.982023322 container died aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 15:51:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:35.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0e9f551e860e9bc0e1a419a5af97d254414f52e00fbab19eab6aac2e0df83796-merged.mount: Deactivated successfully.
Nov 23 15:51:36 np0005532761 podman[154972]: 2025-11-23 20:51:36.157534033 +0000 UTC m=+1.512476027 container remove aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:51:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:36 np0005532761 systemd[1]: libpod-conmon-aef5a5fb7c392c3f42670b4ae32b3cc91268b49c2c7444457c933ec6541073d6.scope: Deactivated successfully.
Nov 23 15:51:36 np0005532761 podman[155015]: 2025-11-23 20:51:36.381950095 +0000 UTC m=+0.064176115 container create 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:36 np0005532761 podman[155015]: 2025-11-23 20:51:36.338575053 +0000 UTC m=+0.020801103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:51:36 np0005532761 systemd[1]: Started libpod-conmon-6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d.scope.
Nov 23 15:51:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:51:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3096ed60a7149462f49c0cf7d169d6766e49576bab88cdf58006c8433849387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3096ed60a7149462f49c0cf7d169d6766e49576bab88cdf58006c8433849387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3096ed60a7149462f49c0cf7d169d6766e49576bab88cdf58006c8433849387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3096ed60a7149462f49c0cf7d169d6766e49576bab88cdf58006c8433849387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:36 np0005532761 podman[155015]: 2025-11-23 20:51:36.575331205 +0000 UTC m=+0.257557225 container init 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:36 np0005532761 podman[155015]: 2025-11-23 20:51:36.583007859 +0000 UTC m=+0.265233879 container start 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:51:36 np0005532761 podman[155015]: 2025-11-23 20:51:36.674172544 +0000 UTC m=+0.356398584 container attach 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:51:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:37.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:51:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:37.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:51:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:37.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:37 np0005532761 lvm[155107]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:51:37 np0005532761 lvm[155107]: VG ceph_vg0 finished
Nov 23 15:51:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:37 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:37 np0005532761 boring_wright[155032]: {}
Nov 23 15:51:37 np0005532761 systemd[1]: libpod-6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d.scope: Deactivated successfully.
Nov 23 15:51:37 np0005532761 systemd[1]: libpod-6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d.scope: Consumed 1.116s CPU time.
Nov 23 15:51:37 np0005532761 podman[155015]: 2025-11-23 20:51:37.35483898 +0000 UTC m=+1.037065010 container died 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d3096ed60a7149462f49c0cf7d169d6766e49576bab88cdf58006c8433849387-merged.mount: Deactivated successfully.
Nov 23 15:51:37 np0005532761 podman[155015]: 2025-11-23 20:51:37.397342824 +0000 UTC m=+1.079568834 container remove 6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:51:37 np0005532761 systemd[1]: libpod-conmon-6a11e514dcd670d8ab12de7542518fe4bbdcd13e659e8f4827cbe18a4332b44d.scope: Deactivated successfully.
Nov 23 15:51:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:51:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:51:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:37.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:37] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:51:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:37] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:51:38 np0005532761 systemd-logind[820]: New session 52 of user zuul.
Nov 23 15:51:38 np0005532761 systemd[1]: Started Session 52 of User zuul.
Nov 23 15:51:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:38 np0005532761 systemd[1]: Stopping User Manager for UID 0...
Nov 23 15:51:38 np0005532761 systemd[154212]: Activating special unit Exit the Session...
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped target Main User Target.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped target Basic System.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped target Paths.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped target Sockets.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped target Timers.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 15:51:38 np0005532761 systemd[154212]: Closed D-Bus User Message Bus Socket.
Nov 23 15:51:38 np0005532761 systemd[154212]: Stopped Create User's Volatile Files and Directories.
Nov 23 15:51:38 np0005532761 systemd[154212]: Removed slice User Application Slice.
Nov 23 15:51:38 np0005532761 systemd[154212]: Reached target Shutdown.
Nov 23 15:51:38 np0005532761 systemd[154212]: Finished Exit the Session.
Nov 23 15:51:38 np0005532761 systemd[154212]: Reached target Exit the Session.
Nov 23 15:51:38 np0005532761 systemd[1]: user@0.service: Deactivated successfully.
Nov 23 15:51:38 np0005532761 systemd[1]: Stopped User Manager for UID 0.
Nov 23 15:51:38 np0005532761 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 15:51:38 np0005532761 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 15:51:38 np0005532761 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 15:51:38 np0005532761 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 15:51:38 np0005532761 systemd[1]: Removed slice User Slice of UID 0.
Nov 23 15:51:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:39 np0005532761 python3.9[155328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:51:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:39 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:39.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:40 np0005532761 python3.9[155486]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:51:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 23 15:51:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:41.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 23 15:51:41 np0005532761 python3.9[155639]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:41 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:41.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:41 np0005532761 python3.9[155791]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:42 np0005532761 python3.9[155944]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:43.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:43 np0005532761 python3.9[156097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:43 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:43.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:44 np0005532761 python3.9[156248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:51:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:45.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:45 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:45 np0005532761 python3.9[156401]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 15:51:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205146 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:51:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:47.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:51:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:47.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:47 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:47] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:51:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:47] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Nov 23 15:51:47 np0005532761 python3.9[156554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:51:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:51:48 np0005532761 python3.9[156675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931107.214411-218-66941757812440/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:51:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:49.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:49 np0005532761 python3.9[156827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:49 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:49 np0005532761 python3.9[156950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931108.814706-263-157472609534682/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:51:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:51 np0005532761 python3.9[157103]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:51:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:51.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:51 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:51 np0005532761 python3.9[157188]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:51:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:51:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:53.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:53 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:53.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:54 np0005532761 python3.9[157344]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:51:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:51:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:54 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:54 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:55 np0005532761 python3.9[157499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:55 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:55 np0005532761 python3.9[157620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931114.807195-374-277477951487461/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:56 np0005532761 python3.9[157770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:51:56 np0005532761 python3.9[157892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931115.9220724-374-275756149542297/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:56 : epoch 69237374 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:51:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:57.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:51:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:57.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:51:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:51:57.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:51:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:57 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:57.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:51:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:51:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Nov 23 15:51:58 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:58Z|00025|memory|INFO|16256 kB peak resident set size after 30.0 seconds
Nov 23 15:51:58 np0005532761 ovn_controller[154148]: 2025-11-23T20:51:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 23 15:51:58 np0005532761 podman[158019]: 2025-11-23 20:51:58.549909885 +0000 UTC m=+0.113657976 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 15:51:58 np0005532761 python3.9[158060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:51:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:58 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:58 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:59 np0005532761 python3.9[158218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931118.225706-506-79206641448893/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:59 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:51:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:59.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:59 np0005532761 python3.9[158368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:59 : epoch 69237374 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:51:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:59 : epoch 69237374 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:51:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:51:59 : epoch 69237374 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:52:00 np0005532761 python3.9[158489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931119.3596196-506-45280689634431/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:52:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:00 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:00 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:01 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:01 np0005532761 python3.9[158641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:01.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:02 np0005532761 python3.9[158795]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:52:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:02 : epoch 69237374 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:52:03
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.nfs', 'backups', 'cephfs.cephfs.meta', 'vms']
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:52:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:52:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:03.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:52:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:52:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:03 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:03 np0005532761 python3.9[158949]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:03.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:03 np0005532761 python3.9[159027]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:04 np0005532761 python3.9[159179]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:52:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:04 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:04 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:04 np0005532761 python3.9[159258]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:05 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:06 np0005532761 python3.9[159411]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:52:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:06 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:06 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:06 np0005532761 python3.9[159566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:07.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:52:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:07.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:52:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:07.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:52:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:07 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:07 np0005532761 python3.9[159645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:52:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:52:08 np0005532761 python3.9[159797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:52:08 np0005532761 python3.9[159876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:08 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205208 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:52:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:08 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:09.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:09 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:09.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:10 np0005532761 python3.9[160029]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:52:10 np0005532761 systemd[1]: Reloading.
Nov 23 15:52:10 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:10 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:52:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:10 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000054s ======
Nov 23 15:52:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:11.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 23 15:52:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:11 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:11 np0005532761 python3.9[160219]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:11.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:11 np0005532761 python3.9[160297]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:52:12 np0005532761 python3.9[160450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:12 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:13 np0005532761 python3.9[160529]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:13 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:14 np0005532761 python3.9[160681]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:52:14 np0005532761 systemd[1]: Reloading.
Nov 23 15:52:14 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:14 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:14 np0005532761 systemd[1]: Starting Create netns directory...
Nov 23 15:52:14 np0005532761 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:52:14 np0005532761 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:52:14 np0005532761 systemd[1]: Finished Create netns directory.
Nov 23 15:52:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:52:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:14 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:15.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:15 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:15.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:15 np0005532761 python3.9[160878]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:16 np0005532761 python3.9[161030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:52:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:16 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:17.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:52:17 np0005532761 python3.9[161155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931136.0732284-959-241581952234436/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:17.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:17 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:17.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:52:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 23 15:52:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:52:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:52:18 np0005532761 python3.9[161307]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:52:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:18 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:19 np0005532761 python3.9[161484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:19 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:19 np0005532761 python3.9[161609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931138.6902392-1034-144794478928274/.source.json _original_basename=.7de887v0 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:19.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:20 np0005532761 python3.9[161762]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:52:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:20 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:21 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:21.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:22 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f4003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:23 np0005532761 python3.9[162192]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 23 15:52:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:23 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:24 np0005532761 python3.9[162346]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:52:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:52:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:24 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:25.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:25 np0005532761 python3.9[162503]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:52:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:25 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:26 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:27.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
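The Alertmanager dispatcher gives up on both ceph-dashboard webhook receivers after two attempts because the POST to `/api/prometheus_receiver` on compute-1/compute-2 never completes within the notification deadline ("context deadline exceeded"). A stdlib-only sketch of a receiver that would accept such posts; this is an illustrative stand-in, not the dashboard's actual implementation, and uses plain HTTP for brevity even though the deployment URL targets port 8443:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PrometheusReceiver(BaseHTTPRequestHandler):
    """Illustrative stand-in for the /api/prometheus_receiver endpoint."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Alertmanager webhook payloads carry an "alerts" list.
        print("received", len(payload.get("alerts", [])), "alert(s)")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), PrometheusReceiver).serve_forever()
```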
Nov 23 15:52:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:27.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:27 np0005532761 python3[162684]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:52:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:27 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29f8004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:27.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:28 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:29.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:29 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:52:30 np0005532761 podman[162747]: 2025-11-23 20:52:30.80249132 +0000 UTC m=+1.865439208 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
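The podman `health_status` event above records one healthcheck run for `ovn_controller` (`health_status=healthy`, `health_failing_streak=0`) along with the container's full config_data label. The same state can be read back on demand; a small sketch using `podman inspect`:

```python
import json
import subprocess

def container_health(name: str) -> dict:
    """Read back the .State.Health block (Status, FailingStreak, Log)
    that podman updates on every healthcheck run."""
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out) or {}  # "null" if the container has no healthcheck

print(container_health("ovn_controller").get("Status"))  # e.g. "healthy"
```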
Nov 23 15:52:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:30 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:31 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:32 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:52:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:52:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:33.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:33 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:52:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
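The mgr's periodic `osd blocklist ls` lands on the mon leader as a `mon_command` and is recorded on the audit channel as a dispatch. The equivalent query from a shell, wrapped here in a small sketch around the `ceph` CLI:

```python
import json
import subprocess

def blocklist_entries() -> list:
    """Issue the same mon command the mgr dispatches above, via the CLI."""
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

for entry in blocklist_entries():
    print(entry.get("addr"), entry.get("until"))
```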
Nov 23 15:52:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:52:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:34 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:35 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:52:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:36 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:37.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:52:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:37.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:37 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:37] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:37] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:52:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:38 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:39.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:39 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:39.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205239 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
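haproxy's Layer-4 check gets "Connection refused" from `backend/nfs.cephfs.0` and marks it DOWN, leaving 2 active servers. A Layer-4 check is simply a TCP connect; a sketch of the same probe (the backend address below is hypothetical):

```python
import socket

def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Layer-4 health check: pass iff a plain TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

# Hypothetical address for the nfs.cephfs.0 backend checked above.
print(l4_check("192.168.122.103", 2049))
```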
Nov 23 15:52:39 np0005532761 podman[162697]: 2025-11-23 20:52:39.970633216 +0000 UTC m=+12.578741129 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 15:52:40 np0005532761 podman[162965]: 2025-11-23 20:52:40.088452621 +0000 UTC m=+0.022475240 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 15:52:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:40 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:41.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:41 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:41 np0005532761 podman[162965]: 2025-11-23 20:52:41.429866252 +0000 UTC m=+1.363888821 container create e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 15:52:41 np0005532761 python3[162684]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
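The `PODMAN-CONTAINER-DEBUG` line shows how `edpm_container_manage` renders the `config_data` dict into a `podman create` argv: `environment` becomes `--env`, `net`/`pid` become `--network`/`--pid`, and `privileged`, `user`, and each entry of `volumes` map to the matching flags. A simplified sketch of that mapping (the real role handles many more keys):

```python
def podman_create_args(name: str, cfg: dict) -> list:
    """Render a subset of edpm config_data into podman create argv
    (simplified sketch; the real role covers many more keys)."""
    args = ["podman", "create", "--name", name]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if "pid" in cfg:
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args.append("--privileged=True")
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    return args + [cfg["image"]]
```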
Nov 23 15:52:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:52:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:52:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:52:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:42 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:43.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:43 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e8001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:52:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:52:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.585548193 +0000 UTC m=+0.040049794 container create e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:52:44 np0005532761 systemd[1]: Started libpod-conmon-e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717.scope.
Nov 23 15:52:44 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.566358424 +0000 UTC m=+0.020860045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.666048631 +0000 UTC m=+0.120550252 container init e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.672640172 +0000 UTC m=+0.127141773 container start e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:52:44 np0005532761 boring_clarke[163165]: 167 167
Nov 23 15:52:44 np0005532761 systemd[1]: libpod-e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717.scope: Deactivated successfully.
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.677538537 +0000 UTC m=+0.132040138 container attach e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.679040249 +0000 UTC m=+0.133541870 container died e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:52:44 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a1a9e1476fd56df0da2e9fc1722ffe55e869744d973274f4ac29b2b95317c458-merged.mount: Deactivated successfully.
Nov 23 15:52:44 np0005532761 podman[163125]: 2025-11-23 20:52:44.733732555 +0000 UTC m=+0.188234156 container remove e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_clarke, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:52:44 np0005532761 systemd[1]: libpod-conmon-e355325a73ccbbca239d12528c6dc2935dbaa3964afbbe5cb3de47bd3dbf8717.scope: Deactivated successfully.
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 23 15:52:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 23 15:52:44 np0005532761 podman[163189]: 2025-11-23 20:52:44.886047301 +0000 UTC m=+0.042393929 container create a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:52:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 23 15:52:44 np0005532761 systemd[1]: Started libpod-conmon-a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8.scope.
Nov 23 15:52:44 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
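The RGWReshardLock INFO lines are a normal multi-instance pattern: each radosgw worker scans the reshard log shards, and a shard whose lock is already held by another RGW process is skipped rather than contended. What is actually queued on those shards can be listed with `radosgw-admin`; a small sketch:

```python
import json
import subprocess

def pending_reshards() -> list:
    """List bucket-index reshard entries queued on the reshard log shards
    (the shards whose locks the INFO messages above refer to)."""
    out = subprocess.run(
        ["radosgw-admin", "reshard", "list"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)
```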
Nov 23 15:52:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:44 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a10003fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:44 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:44 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
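These kernel notes flag that the xfs filesystems behind the overlay mounts carry classic 32-bit inode timestamps (no bigtime), valid only up to 0x7fffffff seconds after the epoch. That cutoff is quick to verify:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2**31 - 1, the largest signed 32-bit epoch second.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```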
Nov 23 15:52:44 np0005532761 podman[163189]: 2025-11-23 20:52:44.955792423 +0000 UTC m=+0.112139071 container init a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:52:44 np0005532761 podman[163189]: 2025-11-23 20:52:44.869077384 +0000 UTC m=+0.025424022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:44 np0005532761 podman[163189]: 2025-11-23 20:52:44.965402747 +0000 UTC m=+0.121749375 container start a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:52:44 np0005532761 podman[163189]: 2025-11-23 20:52:44.97095046 +0000 UTC m=+0.127297118 container attach a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:52:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:52:45 np0005532761 python3.9[163315]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:45.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:45 np0005532761 gracious_williams[163241]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:52:45 np0005532761 gracious_williams[163241]: --> All data devices are unavailable
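The short-lived `quay.io/ceph/ceph` containers (boring_clarke, gracious_williams, ...) are cephadm running `ceph-volume` device scans; this one reports 0 physical data devices and 1 LVM device, all unavailable, so no new OSDs can be prepared. A sketch of reading the same inventory, assuming cephadm's documented `shell` pass-through:

```python
import json
import subprocess

def available_devices() -> list:
    """Run ceph-volume's device inventory inside a cephadm shell and keep
    only devices it reports as available for new OSDs."""
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "inventory",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [d for d in json.loads(out) if d.get("available")]
```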
Nov 23 15:52:45 np0005532761 systemd[1]: libpod-a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8.scope: Deactivated successfully.
Nov 23 15:52:45 np0005532761 podman[163189]: 2025-11-23 20:52:45.299872191 +0000 UTC m=+0.456218839 container died a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:52:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a97b5fc01aa92bf64d0ce463a830f8b2bf94dcfdb7aca6451f1a045ed9ce434c-merged.mount: Deactivated successfully.
Nov 23 15:52:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:45 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:45 np0005532761 podman[163189]: 2025-11-23 20:52:45.361741236 +0000 UTC m=+0.518087864 container remove a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:52:45 np0005532761 systemd[1]: libpod-conmon-a5833de5f51c0842f26a4e7c705d75989e9c09775a811820056ac184ee9fe9e8.scope: Deactivated successfully.
Nov 23 15:52:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.887965921 +0000 UTC m=+0.044415134 container create cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:52:45 np0005532761 systemd[1]: Started libpod-conmon-cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485.scope.
Nov 23 15:52:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.862822988 +0000 UTC m=+0.019272221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.96345325 +0000 UTC m=+0.119902483 container init cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.971873492 +0000 UTC m=+0.128322705 container start cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:52:45 np0005532761 hardcore_bhabha[163522]: 167 167
Nov 23 15:52:45 np0005532761 systemd[1]: libpod-cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485.scope: Deactivated successfully.
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.982589778 +0000 UTC m=+0.139038991 container attach cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:45 np0005532761 podman[163476]: 2025-11-23 20:52:45.983141073 +0000 UTC m=+0.139590286 container died cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:52:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5e66efce1e18d02d5ad04522856be0ec0625cdf43b77aa947d2627a00420a5b3-merged.mount: Deactivated successfully.
Nov 23 15:52:46 np0005532761 podman[163476]: 2025-11-23 20:52:46.048110333 +0000 UTC m=+0.204559566 container remove cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:52:46 np0005532761 systemd[1]: libpod-conmon-cf08f01aef165bf8f8980a2c81d97247432daa0a7468df8352e5505560075485.scope: Deactivated successfully.
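Each of these throwaway containers walks the full podman lifecycle recorded above: create → init → start → attach → died → remove, with matching libpod/conmon systemd scopes bracketing the run. The same stream can be followed live; a sketch using `podman events`:

```python
import json
import subprocess

def stream_podman_events() -> None:
    """Follow podman lifecycle events (create/init/start/attach/died/remove)
    as line-delimited JSON, mirroring the journal entries above."""
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
```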
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.217828868 +0000 UTC m=+0.057802163 container create e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:52:46 np0005532761 systemd[1]: Started libpod-conmon-e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a.scope.
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.182128754 +0000 UTC m=+0.022102069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0ee56b909a2626b760c290d24b43f43145ddce4196fb549854360d710b14cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0ee56b909a2626b760c290d24b43f43145ddce4196fb549854360d710b14cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0ee56b909a2626b760c290d24b43f43145ddce4196fb549854360d710b14cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0ee56b909a2626b760c290d24b43f43145ddce4196fb549854360d710b14cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
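
The four xfs warnings quote 0x7fffffff because these filesystems lack the bigtime feature, so on-disk inode timestamps are 32-bit signed seconds. The limit the kernel is citing can be checked directly:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF                 # largest 32-bit signed Unix time
    print(limit)                       # 2147483647
    print(datetime.fromtimestamp(limit, timezone.utc))
    # 2038-01-19 03:14:07+00:00 -> "supports timestamps until 2038"
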
Nov 23 15:52:46 np0005532761 python3.9[163616]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.328125517 +0000 UTC m=+0.168098822 container init e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.334828361 +0000 UTC m=+0.174801656 container start e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.349905436 +0000 UTC m=+0.189878821 container attach e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:46 np0005532761 epic_curran[163639]: {
Nov 23 15:52:46 np0005532761 epic_curran[163639]:    "1": [
Nov 23 15:52:46 np0005532761 epic_curran[163639]:        {
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "devices": [
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "/dev/loop3"
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            ],
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "lv_name": "ceph_lv0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "lv_size": "21470642176",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "name": "ceph_lv0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "tags": {
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.cluster_name": "ceph",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.crush_device_class": "",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.encrypted": "0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.osd_id": "1",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.type": "block",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.vdo": "0",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:                "ceph.with_tpm": "0"
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            },
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "type": "block",
Nov 23 15:52:46 np0005532761 epic_curran[163639]:            "vg_name": "ceph_vg0"
Nov 23 15:52:46 np0005532761 epic_curran[163639]:        }
Nov 23 15:52:46 np0005532761 epic_curran[163639]:    ]
Nov 23 15:52:46 np0005532761 epic_curran[163639]: }
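
The block epic_curran just printed is ceph-volume "lvm list --format json" output: a map keyed by OSD id, with the OSD metadata duplicated between the flat lv_tags string and the parsed tags object. A sketch of consuming it, with the JSON trimmed to just the fields used:

    import json

    raw = """{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "devices": ["/dev/loop3"],
                     "tags": {"ceph.osd_fsid":
                              "71c99843-04fc-447b-a9fd-4e17520a545c"}}]}"""

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']}) on {lv['devices']}")
    # osd.1: /dev/ceph_vg0/ceph_lv0 (fsid 71c99843-...) on ['/dev/loop3']
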
Nov 23 15:52:46 np0005532761 systemd[1]: libpod-e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a.scope: Deactivated successfully.
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.624572872 +0000 UTC m=+0.464546167 container died e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:52:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bf0ee56b909a2626b760c290d24b43f43145ddce4196fb549854360d710b14cd-merged.mount: Deactivated successfully.
Nov 23 15:52:46 np0005532761 podman[163622]: 2025-11-23 20:52:46.705779519 +0000 UTC m=+0.545752814 container remove e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_curran, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:52:46 np0005532761 systemd[1]: libpod-conmon-e43637ed75e17b7bf24cc144d00a0a3270a321e46f5ce1fcd1980a827e16c92a.scope: Deactivated successfully.
Nov 23 15:52:46 np0005532761 python3.9[163722]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:52:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:46 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:47.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:52:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:47.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:52:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:47.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
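
Alertmanager is POSTing a JSON document carrying an "alerts" array to each ceph-dashboard webhook URL and timing out because nothing answers on compute-1/compute-2 port 8443. A stand-in receiver that would make the retries stop, assuming only the path and port from the log (the real endpoint is the mgr dashboard module):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                print(f"got {len(json.loads(body or '{}').get('alerts', []))} alert(s)")
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
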
Nov 23 15:52:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:47.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
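
These anonymous "HEAD / HTTP/1.0" 200 entries recur every couple of seconds from 192.168.122.100 and .102 and look like load-balancer health probes against the beast frontend. A one-line reproduction, assuming RGW listens on port 8080 (the journal never shows the port, and http.client speaks HTTP/1.1 rather than the probe's 1.0):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # expect 200, matching the log
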
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.28044981 +0000 UTC m=+0.050690937 container create 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:52:47 np0005532761 systemd[1]: Started libpod-conmon-5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6.scope.
Nov 23 15:52:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.25829473 +0000 UTC m=+0.028535877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.358734177 +0000 UTC m=+0.128975324 container init 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 15:52:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:47 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.367864438 +0000 UTC m=+0.138105565 container start 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.374340256 +0000 UTC m=+0.144581403 container attach 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:47 np0005532761 relaxed_cray[163999]: 167 167
Nov 23 15:52:47 np0005532761 systemd[1]: libpod-5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6.scope: Deactivated successfully.
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.376982599 +0000 UTC m=+0.147223756 container died 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:52:47 np0005532761 python3.9[163981]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931166.8000965-1298-193603088961317/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0cc2021705be109182fd0bd1110cd5e37116e1d949340c0020016b89bf0c741d-merged.mount: Deactivated successfully.
Nov 23 15:52:47 np0005532761 podman[163982]: 2025-11-23 20:52:47.430255057 +0000 UTC m=+0.200496184 container remove 5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:47 np0005532761 systemd[1]: libpod-conmon-5a9dfe935ff7f3c4feea8059023ba604e5c83e1b23d053fdbd50c14e3bad88a6.scope: Deactivated successfully.
Nov 23 15:52:47 np0005532761 podman[164047]: 2025-11-23 20:52:47.596359403 +0000 UTC m=+0.043306354 container create b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Nov 23 15:52:47 np0005532761 systemd[1]: Started libpod-conmon-b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5.scope.
Nov 23 15:52:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832e73e4546d1bbba09ae22d6fd67df20709737693d6804efe30058e3b8b6c85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832e73e4546d1bbba09ae22d6fd67df20709737693d6804efe30058e3b8b6c85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832e73e4546d1bbba09ae22d6fd67df20709737693d6804efe30058e3b8b6c85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832e73e4546d1bbba09ae22d6fd67df20709737693d6804efe30058e3b8b6c85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:47 np0005532761 podman[164047]: 2025-11-23 20:52:47.577922364 +0000 UTC m=+0.024869325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:47 np0005532761 podman[164047]: 2025-11-23 20:52:47.691204605 +0000 UTC m=+0.138151566 container init b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:52:47 np0005532761 podman[164047]: 2025-11-23 20:52:47.698668421 +0000 UTC m=+0.145615362 container start b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:52:47 np0005532761 podman[164047]: 2025-11-23 20:52:47.702383663 +0000 UTC m=+0.149330614 container attach b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:52:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:47] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:47] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Nov 23 15:52:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:47 np0005532761 python3.9[164119]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:52:47 np0005532761 systemd[1]: Reloading.
Nov 23 15:52:48 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:48 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
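
The audit line shows mgr.compute-0.oyehye periodically asking the mon for the OSD blocklist. The same query is available from the CLI; a sketch using subprocess rather than the mgr's internal mon_command path:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out or "[]"))     # [] on a cluster with no blocklist entries
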
Nov 23 15:52:48 np0005532761 lvm[164225]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:52:48 np0005532761 lvm[164225]: VG ceph_vg0 finished
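
The two lvm lines are event-based autoactivation concluding that every PV of ceph_vg0 (here just /dev/loop3) is online. The ceph.* tags that ceph-volume reported earlier live on the LV itself and can be confirmed with lvs' JSON reporting; a sketch, assuming lvm2's json report format:

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"] == "ceph_vg0":
            print(lv["lv_name"], lv["lv_tags"])
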
Nov 23 15:52:48 np0005532761 elegant_black[164104]: {}
Nov 23 15:52:48 np0005532761 systemd[1]: libpod-b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5.scope: Deactivated successfully.
Nov 23 15:52:48 np0005532761 podman[164047]: 2025-11-23 20:52:48.476336363 +0000 UTC m=+0.923283304 container died b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:52:48 np0005532761 systemd[1]: libpod-b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5.scope: Consumed 1.146s CPU time.
Nov 23 15:52:48 np0005532761 systemd[1]: var-lib-containers-storage-overlay-832e73e4546d1bbba09ae22d6fd67df20709737693d6804efe30058e3b8b6c85-merged.mount: Deactivated successfully.
Nov 23 15:52:48 np0005532761 podman[164047]: 2025-11-23 20:52:48.518001341 +0000 UTC m=+0.964948272 container remove b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_black, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:52:48 np0005532761 systemd[1]: libpod-conmon-b9dc50c30c390febd9b164da580bc2eb17ff3ff80c11f72026888a72d85592a5.scope: Deactivated successfully.
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:52:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:52:48 np0005532761 python3.9[164317]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
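
The ansible-systemd call with state=restarted enabled=True daemon_reload=False reduces, on the host, to two systemctl invocations; the module wraps them in idempotence checks and failure handling not reproduced in this sketch:

    import subprocess

    unit = "edpm_ovn_metadata_agent.service"
    subprocess.run(["systemctl", "enable", unit], check=True)   # enabled=True
    subprocess.run(["systemctl", "restart", unit], check=True)  # state=restarted
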
Nov 23 15:52:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:48 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a18001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:48 np0005532761 systemd[1]: Reloading.
Nov 23 15:52:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:49 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:49.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:49 np0005532761 systemd[1]: Starting ovn_metadata_agent container...
Nov 23 15:52:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:49 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:49 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:52:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63fedd872d99d35d1c385c7d09b781b4dbf0538df42d29281c00e2d81cbd8e5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63fedd872d99d35d1c385c7d09b781b4dbf0538df42d29281c00e2d81cbd8e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:49 np0005532761 systemd[1]: Started /usr/bin/podman healthcheck run e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4.
Nov 23 15:52:49 np0005532761 podman[164384]: 2025-11-23 20:52:49.453191653 +0000 UTC m=+0.155754282 container init e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 15:52:49 np0005532761 podman[164384]: 2025-11-23 20:52:49.47995976 +0000 UTC m=+0.182522359 container start e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + sudo -E kolla_set_configs
Nov 23 15:52:49 np0005532761 edpm-start-podman-container[164384]: ovn_metadata_agent
Nov 23 15:52:49 np0005532761 edpm-start-podman-container[164383]: Creating additional drop-in dependency for "ovn_metadata_agent" (e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4)
Nov 23 15:52:49 np0005532761 podman[164404]: 2025-11-23 20:52:49.580405918 +0000 UTC m=+0.091644596 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
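
health_status=healthy comes from the transient healthcheck unit systemd started a few lines up: podman executes the configured test (/openstack/healthcheck) inside the container and maps its exit status to the health state. The same check can be driven by hand, with exit code 0 meaning healthy:

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
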
Nov 23 15:52:49 np0005532761 systemd[1]: Reloading.
Nov 23 15:52:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Validating config file
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Copying service configuration files
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Writing out command to execute
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 15:52:49 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: ++ cat /run_command
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + CMD=neutron-ovn-metadata-agent
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + ARGS=
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + sudo kolla_copy_cacerts
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + [[ ! -n '' ]]
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + . kolla_extend_start
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: Running command: 'neutron-ovn-metadata-agent'
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + umask 0022
Nov 23 15:52:49 np0005532761 ovn_metadata_agent[164399]: + exec neutron-ovn-metadata-agent
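
The INFO lines from kolla_set_configs and the traced shell that follows come from kolla's start script: read /var/lib/kolla/config_files/config.json, copy sources over destinations (COPY_ALWAYS), write the command to /run_command, then exec it. A condensed sketch of the copy step, assuming kolla's documented config.json shape ("command" plus a "config_files" list with "source"/"dest"); ownership, "perm", globbing, and "optional" sources are omitted:

    import json
    import pathlib
    import shutil

    cfg = json.loads(
        pathlib.Path("/var/lib/kolla/config_files/config.json").read_text())

    for f in cfg.get("config_files", []):
        src, dest = pathlib.Path(f["source"]), pathlib.Path(f["dest"])
        dest.parent.mkdir(parents=True, exist_ok=True)
        if src.is_dir():
            shutil.copytree(src, dest, dirs_exist_ok=True)
        else:
            shutil.copy2(src, dest)

    pathlib.Path("/run_command").write_text(cfg["command"])
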
Nov 23 15:52:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:49 np0005532761 systemd[1]: Started ovn_metadata_agent container.
Nov 23 15:52:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 15:52:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:50 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29e80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:51 np0005532761 systemd[1]: session-52.scope: Deactivated successfully.
Nov 23 15:52:51 np0005532761 systemd[1]: session-52.scope: Consumed 52.300s CPU time.
Nov 23 15:52:51 np0005532761 systemd-logind[820]: Session 52 logged out. Waiting for processes to exit.
Nov 23 15:52:51 np0005532761 systemd-logind[820]: Removed session 52.
Nov 23 15:52:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:51.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:51 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a18001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.801 164405 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.801 164405 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.801 164405 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.802 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.802 164405 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.802 164405 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.802 164405 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.802 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.803 164405 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.804 164405 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.805 164405 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.806 164405 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.807 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.808 164405 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.809 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.810 164405 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.811 164405 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.812 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.813 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.814 164405 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.815 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.816 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.817 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.818 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.819 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.820 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.821 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.822 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.823 164405 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.824 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.825 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.826 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.827 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.828 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.829 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.830 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.831 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.832 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.833 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.834 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.835 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.836 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.837 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.838 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.838 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.838 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.838 164405 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.838 164405 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
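The block ending at the asterisk separator above is oslo.config's log_opt_values() dump: with debug enabled, every registered option is printed with its effective value, grouped by section (ovn.*, OVS.*, ovs.*, oslo_messaging_rabbit.*, oslo_messaging_notifications.*), and options declared secret are masked as ****. A minimal sketch of the mechanism, borrowing a few option names from the dump (the defaults and the secret option here are illustrative):

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF

    # A few options mirroring the [ovn] entries in the dump above.
    ovn_opts = [
        cfg.StrOpt('ovn_sb_connection',
                   default='ssl:ovsdbserver-sb.openstack.svc:6642',
                   help='OVN southbound OVSDB endpoint.'),
        cfg.IntOpt('ovsdb_connection_timeout', default=180),
        # secret=True is what makes log_opt_values() print '****'.
        cfg.StrOpt('example_secret', secret=True, default='hunter2'),
    ]
    CONF.register_opts(ovn_opts, group='ovn')

    # The agent additionally passes --config-file /etc/neutron/neutron.conf
    # and --config-dir /etc/neutron.conf.d; defaults suffice for the sketch.
    CONF(args=[])
    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)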
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.846 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.846 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.846 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.847 164405 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.847 164405 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
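The lines above are ovsdbapp preparing its IDL for the local Open vSwitch database: it pulls the Open_vSwitch schema from tcp:127.0.0.1:6640 (the ovs.ovsdb_connection value in the dump), auto-creates lookup indexes for Bridge.name, Port.name and Interface.name, then connects. A minimal sketch of the same setup, assuming ovsdb-server is listening on that TCP socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Fetch the Open_vSwitch schema and build an IDL; ovsdbapp creates the
    # schema indexes (Bridge.name, Port.name, Interface.name) on init.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)  # OVS.ovsdb_timeout
    ovs_api = impl_idl.OvsdbIdl(conn)

    # The same kind of lookup the agent performs against the local switch.
    print(ovs_api.br_exists('br-int').execute(check_error=True))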
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.861 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name fa015a79-13cd-4722-b3c7-7f2e111a2432 (UUID: fa015a79-13cd-4722-b3c7-7f2e111a2432) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.882 164405 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.883 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.883 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.883 164405 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.886 164405 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.892 164405 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
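For the southbound database the agent dials ssl:ovsdbserver-sb.openstack.svc:6642 using the [ovn] ovn_sb_* key, certificate and CA paths from the dump; with python-ovs, that TLS material has to be installed on the Stream class before an ssl: target can be dialed. A sketch under the assumption that the same PKI files are readable:

    from ovs import stream
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Register client key/cert and CA before dialing ssl: URLs (paths from
    # ovn.ovn_sb_private_key / ovn_sb_certificate / ovn_sb_ca_cert above).
    stream.Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    stream.Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    stream.Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    conn = connection.Connection(idl=idl, timeout=180)  # ovsdb_connection_timeout
    sb_api = impl_idl.OvnSbApiIdlImpl(conn)
    print(sb_api.chassis_list().execute(check_error=True))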
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.897 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'fa015a79-13cd-4722-b3c7-7f2e111a2432'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0e60155700>], external_ids={}, name=fa015a79-13cd-4722-b3c7-7f2e111a2432, nb_cfg_timestamp=1763931096543, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
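The Matched CREATE line is ovsdbapp's row-event machinery firing: a watcher registered for create events on Chassis_Private rows whose name equals this chassis ID, which is how the agent notices its own registration row appear in the southbound database. A minimal sketch of such an event class; the run() body is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        """Fire once this chassis's Chassis_Private row is created."""

        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, 'Chassis_Private', conditions)
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # Illustrative: the real handler provisions metadata for the chassis.
            print('chassis registered:', row.name)

    # Hooked up via the IDL's notify handler, e.g.:
    #   sb_api.idl.notify_handler.watch_event(ChassisPrivateCreateEvent(name))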
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.898 164405 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f0e60155a00>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
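The Subscribe line records a neutron_lib callback registration: MetadataProxyHandler.post_fork_initialize is attached to the (process, after_init) event, so each worker forked below runs its own post-fork setup (fresh southbound connection, its own proxy socket). The same pattern, with an illustrative handler:

    from neutron_lib.callbacks import events, registry, resources

    def post_fork_initialize(resource, event, trigger, payload=None):
        # Illustrative per-worker setup: open sockets, DB connections, etc.
        print('initializing after fork:', resource, event)

    # What the "Subscribe:" line above records:
    registry.subscribe(post_fork_initialize, resources.PROCESS, events.AFTER_INIT)

    # What the child later logs as
    # 'Publish callbacks ... for process (None), after_init':
    registry.publish(resources.PROCESS, events.AFTER_INIT, None)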
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.899 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.899 164405 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.899 164405 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
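The singleton_lock trio above is oslo_concurrency.lockutils serializing creation of the process launcher; the "context-manager" lines further down come from the same module's decorator form. A minimal sketch of both idioms:

    from oslo_concurrency import lockutils

    # Context-manager form: emits the Acquiring/Acquired/Releasing DEBUG
    # lines seen above.
    with lockutils.lock('singleton_lock'):
        pass  # critical section

    # Decorator form: emits 'Lock "..." acquired by "..." :: waited ...',
    # like the later "context-manager" lines.
    @lockutils.synchronized('context-manager')
    def create_context_manager():
        return object()

    create_context_manager()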
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.899 164405 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.903 164405 DEBUG oslo_service.service [-] Started child 164516 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
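"Starting 1 workers" / "Started child 164516" is oslo_service's ProcessLauncher forking the single metadata-proxy worker (metadata_workers = 1 in the dump below). A minimal sketch of launching a Service subclass in worker processes; the service class and its body are illustrative:

    from oslo_config import cfg
    from oslo_service import service

    class MetadataProxyService(service.Service):
        def start(self):
            super().start()
            # Illustrative: bind the proxy socket and start serving here.

    CONF = cfg.CONF
    CONF(args=[])

    launcher = service.ProcessLauncher(CONF, restart_method='mutate')
    launcher.launch_service(MetadataProxyService(), workers=1)
    launcher.wait()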
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.907 164405 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgioglv0e/privsep.sock']#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.908 164516 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169352'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.935 164516 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.936 164516 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.936 164516 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.939 164516 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.946 164516 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 23 15:52:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:51.954 164516 INFO eventlet.wsgi.server [-] (164516) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
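The worker's banner "wsgi starting up on http:/var/lib/neutron/metadata_proxy" is eventlet's WSGI startup message for a listener bound to the metadata_proxy_socket path from the config dump; for an AF_UNIX socket eventlet prints "http:" followed by the socket path, hence the odd-looking URL. A sketch of the same bind; the WSGI app is a placeholder (the real handler proxies requests to nova-metadata with the shared-secret signature):

    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        # Placeholder; the real proxy forwards to nova_metadata_host:8775.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    # Bind the UNIX socket the proxy serves on (metadata_proxy_socket). The
    # agent removes any stale socket file first; omitted here for brevity.
    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs: (pid) wsgi starting up on http:<path>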
Nov 23 15:52:52 np0005532761 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.539 164405 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.540 164405 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgioglv0e/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.426 164524 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.432 164524 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.435 164524 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.435 164524 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164524#033[00m
Nov 23 15:52:52 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:52.543 164524 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1b171a-5404-4e94-9ea4-f77bd722f367]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
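The privsep sequence above is oslo.privsep's first-use handshake: the unprivileged agent launches sudo neutron-rootwrap ... privsep-helper with a throwaway UNIX socket path, the helper starts as root and then confines itself to the configured capability set (CAP_SYS_ADMIN here, as the daemon reports), and every later privileged call becomes a request/reply over that socket (the "privsep: reply[...]" lines). A minimal sketch of declaring a context and an entrypoint; the names and the function body are illustrative, not Neutron's:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # A privileged context limited to CAP_SYS_ADMIN, matching the daemon's
    # 'capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none' line.
    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def make_run_dir(path):
        # Runs inside the privsep daemon (the root pid seen in the log).
        import os
        os.makedirs(path, exist_ok=True)

    # The first call from the unprivileged process spawns the helper (here
    # via rootwrap, per the 'Running privsep helper' line) and proxies the
    # call over the socket.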
Nov 23 15:52:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 15:52:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a1c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:52 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a180032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.050 164524 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.050 164524 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.050 164524 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:52:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:53.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:53 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.562 164524 DEBUG oslo.privsep.daemon [-] privsep: reply[c563cd6a-30b3-4cf8-b170-4290e4b5732d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.564 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, column=external_ids, values=({'neutron:ovn-metadata-id': '7b03f298-e32e-5aa3-81de-b00db8c70a97'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.576 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
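The two Running txn lines stamp the agent's identity into its Chassis_Private row: a DbAddCommand merges neutron:ovn-metadata-id into external_ids, then a DbSetCommand (if_exists=True) writes neutron:ovn-bridge. A sketch of the equivalent calls, continuing from the sb_api handle in the southbound sketch above; the agent actually issues them as two separate transactions, and if_exists is passed here on the assumption that the installed ovsdbapp accepts it, as the DbSetCommand repr in the log indicates:

    # Continues the southbound sketch above (sb_api; chassis name from the log).
    chassis = 'fa015a79-13cd-4722-b3c7-7f2e111a2432'

    with sb_api.transaction(check_error=True) as txn:
        # db_add: merge a key into the map column, keeping existing keys.
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id': '7b03f298-e32e-5aa3-81de-b00db8c70a97'}))
        # db_set: update the given key(s) in the map, only if the row exists.
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
            if_exists=True))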
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.617 164405 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.618 164405 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.619 164405 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.620 164405 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.621 164405 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.622 164405 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.623 164405 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.624 164405 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.625 164405 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.626 164405 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.627 164405 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.628 164405 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.629 164405 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.630 164405 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.631 164405 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.632 164405 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.633 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.634 164405 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.635 164405 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.636 164405 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.637 164405 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.638 164405 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.639 164405 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.640 164405 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.641 164405 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.642 164405 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.643 164405 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.644 164405 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.645 164405 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.646 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.647 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.648 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.649 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:53 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:52:53.650 164405 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
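
Everything above this asterisk separator is the neutron OVN metadata agent dumping its effective configuration at startup: oslo.config walks every registered option and logs one 'option = value' line at DEBUG via ConfigOpts.log_opt_values(), masking options registered as secret (hence transport_url = **** a few lines up). A minimal sketch of the same mechanism, with a couple of hypothetical options standing in for the agent's real ones:

    import logging
    from oslo_config import cfg

    # Hypothetical options for illustration; any registered opt is dumped
    # the same way. Opts registered with secret=True are masked as '****'.
    opts = [
        cfg.IntOpt('ovsdb_probe_interval', default=60000),
        cfg.StrOpt('transport_url', secret=True),
    ]
    CONF = cfg.CONF
    CONF.register_opts(opts)

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Emits one 'option = value' DEBUG line per registered option, in the
    # same shape as the lines captured above.
    CONF.log_opt_values(LOG, logging.DEBUG)

Each emitted line carries the source location of log_opt_values itself, which is why every line in the dump above points at oslo_config/cfg.py:2609.
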
Nov 23 15:52:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
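
The radosgw triplets that recur throughout this section (starting new request / req done / beast access line) are anonymous HEAD / probes from 192.168.122.100 and .102 arriving roughly every two seconds, i.e. load-balancer health checks against the beast frontend, each answered 200 with sub-millisecond latency. The access line is regular enough to mine for latency figures; a small sketch, where the regex is an assumption about this exact line format rather than anything radosgw provides:

    import re

    # Matches beast access lines like the one directly above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous '
            '[23/Nov/2025:20:52:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        # -> 192.168.122.102 200 0.0
        print(m.group('client'), m.group('status'), float(m.group('latency')))
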
Nov 23 15:52:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 15:52:54 np0005532761 kernel: ganesha.nfsd[148948]: segfault at 50 ip 00007f2acc33732e sp 00007f2a8cff8210 error 4 in libntirpc.so.5.8[7f2acc31c000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:52:54 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:52:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[142140]: 23/11/2025 20:52:54 : epoch 69237374 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29ec004050 fd 39 proxy ignored for local
Nov 23 15:52:54 np0005532761 systemd[1]: Started Process Core Dump (PID 164532/UID 0).
Nov 23 15:52:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:55.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:55.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:56 np0005532761 systemd-logind[820]: New session 53 of user zuul.
Nov 23 15:52:56 np0005532761 systemd[1]: Started Session 53 of User zuul.
Nov 23 15:52:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 15:52:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:52:57.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:52:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:57.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:57 np0005532761 python3.9[164689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:52:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:57] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:52:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:52:57] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Nov 23 15:52:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:57 np0005532761 systemd-coredump[164533]: Process 142152 (ganesha.nfsd) of user 0 dumped core.
Nov 23 15:52:57 np0005532761 systemd-coredump[164533]:   Stack trace of thread 57:
Nov 23 15:52:57 np0005532761 systemd-coredump[164533]:   #0  0x00007f2acc33732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Nov 23 15:52:57 np0005532761 systemd-coredump[164533]:   ELF object binary architecture: AMD x86-64
Nov 23 15:52:57 np0005532761 systemd[1]: systemd-coredump@4-164532-0.service: Deactivated successfully.
Nov 23 15:52:57 np0005532761 systemd[1]: systemd-coredump@4-164532-0.service: Consumed 1.177s CPU time.
Nov 23 15:52:58 np0005532761 podman[164724]: 2025-11-23 20:52:58.043539823 +0000 UTC m=+0.025189505 container died 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:52:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-476171b081ef26795da06e70ed6e39b57c960a61697d79c2e4d6df0e734cb32d-merged.mount: Deactivated successfully.
Nov 23 15:52:58 np0005532761 podman[164724]: 2025-11-23 20:52:58.754360394 +0000 UTC m=+0.736010056 container remove 54b1614ab5ca6d906b414bcdf51f1e0562e28dacc02e62bfd8c67a1a7f46cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:52:58 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:52:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 15:52:58 np0005532761 python3.9[164867]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:52:58 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:52:58 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.689s CPU time.
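
This is the tail end of the nfs-ganesha crash that began at 15:52:54: the kernel reported a SIGSEGV inside libntirpc.so.5.8, systemd-coredump captured the core from PID 142152, podman reaped the container, and the cephadm-managed unit failed with status=139, which by the usual shell convention is 128 plus SIGSEGV (signal 11). A short sketch of decoding that status and pulling up the captured core, assuming coredumpctl is available on the host:

    import signal
    import subprocess

    status = 139  # systemd: Main process exited, code=exited, status=139/n/a
    if status > 128:
        sig = signal.Signals(status - 128)
        print(f'terminated by {sig.name}')  # -> terminated by SIGSEGV

    # Inspect the core captured by systemd-coredump (PID 142152 above),
    # matching by command name:
    subprocess.run(['coredumpctl', 'info', 'ganesha.nfsd'], check=False)
    # For an interactive backtrace (debuginfo packages permitting):
    #   coredumpctl gdb ganesha.nfsd

From there, a symbolized backtrace would resolve the raw frame at libntirpc.so.5.8+0x2232e recorded in the coredump entry above.
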
Nov 23 15:52:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:59.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:52:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:53:00 np0005532761 python3.9[165086]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:53:00 np0005532761 systemd[1]: Reloading.
Nov 23 15:53:00 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:53:00 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:53:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 426 B/s wr, 169 op/s
Nov 23 15:53:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:53:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:53:01 np0005532761 python3.9[165274]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:53:01 np0005532761 network[165291]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:53:01 np0005532761 network[165292]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:53:01 np0005532761 network[165293]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:53:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 15:53:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205302 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
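
The crash propagates to the service's load balancer: with ganesha gone, haproxy's Layer4 health check gets connection refused and marks backend server nfs.cephfs.2 DOWN, leaving a single active server. The same state can be read programmatically from haproxy's runtime (stats) socket; a minimal sketch, where the socket path is an assumption to be checked against the 'stats socket' line in the generated haproxy.cfg:

    import socket

    # Assumed path; confirm against the deployed haproxy.cfg.
    STATS_SOCK = '/var/lib/haproxy/stats'

    def haproxy_cmd(cmd: str) -> str:
        """Send one runtime-API command and return the raw response."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(STATS_SOCK)
            s.sendall((cmd + '\n').encode())
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b''.join(chunks).decode()

    # CSV with one row per frontend/backend/server; the 'status' column
    # shows UP/DOWN per backend server, matching the WARNING above.
    print(haproxy_cmd('show stat'))
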
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:53:03
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['vms', '.nfs', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:53:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:53:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:53:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:53:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:53:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:53:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:53:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:03.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:53:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:05.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:53:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:53:06 np0005532761 podman[165435]: 2025-11-23 20:53:06.597709885 +0000 UTC m=+0.117194038 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 15:53:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:53:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205306 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:53:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [NOTICE] 326/205306 (4) : haproxy version is 2.3.17-d1c9119
Nov 23 15:53:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [NOTICE] 326/205306 (4) : path to executable is /usr/local/sbin/haproxy
Nov 23 15:53:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [ALERT] 326/205306 (4) : backend 'backend' has no server available!
Nov 23 15:53:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:07.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:53:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:53:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:07.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:53:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:53:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:53:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:08 np0005532761 python3.9[165589]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:53:08 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 5.
Nov 23 15:53:08 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:53:08 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.689s CPU time.
Nov 23 15:53:08 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:53:09 np0005532761 podman[165762]: 2025-11-23 20:53:09.172592156 +0000 UTC m=+0.040156657 container create 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:53:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6731e09c44e2c308e4d3af5adfbb7ac3ec3c6508677179371a44db14348867/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6731e09c44e2c308e4d3af5adfbb7ac3ec3c6508677179371a44db14348867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6731e09c44e2c308e4d3af5adfbb7ac3ec3c6508677179371a44db14348867/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6731e09c44e2c308e4d3af5adfbb7ac3ec3c6508677179371a44db14348867/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:09 np0005532761 podman[165762]: 2025-11-23 20:53:09.241708701 +0000 UTC m=+0.109273222 container init 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 15:53:09 np0005532761 podman[165762]: 2025-11-23 20:53:09.246101481 +0000 UTC m=+0.113665982 container start 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:53:09 np0005532761 bash[165762]: 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0
Nov 23 15:53:09 np0005532761 podman[165762]: 2025-11-23 20:53:09.15421535 +0000 UTC m=+0.021779871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:53:09 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:53:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:53:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:09.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:53:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:09 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:09 np0005532761 python3.9[165803]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:10 np0005532761 python3.9[166000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:53:11 np0005532761 python3.9[166154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:11.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:11 np0005532761 python3.9[166308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:12 np0005532761 python3.9[166461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 23 15:53:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:13.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:13 np0005532761 python3.9[166618]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:13 np0005532761 auditd[702]: Audit daemon rotating log files
Nov 23 15:53:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:13.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 15:53:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:15.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:15 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:53:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:17.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:53:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:17.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:17 np0005532761 python3.9[166775]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:53:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:53:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205317 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:18 np0005532761 python3.9[166927]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:53:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:53:18 np0005532761 python3.9[167080]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:53:19 np0005532761 python3.9[167233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:19 np0005532761 python3.9[167410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:20 np0005532761 podman[167534]: 2025-11-23 20:53:20.164371054 +0000 UTC m=+0.087995105 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 23 15:53:20 np0005532761 python3.9[167581]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 15:53:20 np0005532761 python3.9[167736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:21.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000e:nfs.cephfs.2: -2
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:21 : epoch 69237435 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:21.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 15:53:22 np0005532761 python3.9[167901]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:22 np0005532761 kernel: ganesha.nfsd[167930]: segfault at 50 ip 00007fa5be4e832e sp 00007fa580ff8210 error 4 in libntirpc.so.5.8[7fa5be4cd000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 23 15:53:22 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:53:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[165806]: 23/11/2025 20:53:22 : epoch 69237435 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000df0 fd 38 proxy ignored for local
Nov 23 15:53:22 np0005532761 systemd[1]: Started Process Core Dump (PID 167934/UID 0).
Nov 23 15:53:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:23.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:23 np0005532761 python3.9[168059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:23.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:23 np0005532761 systemd-coredump[167944]: Process 165810 (ganesha.nfsd) of user 0 dumped core.
Nov 23 15:53:23 np0005532761 systemd-coredump[167944]: Stack trace of thread 51:
Nov 23 15:53:23 np0005532761 systemd-coredump[167944]: #0  0x00007fa5be4e832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Nov 23 15:53:23 np0005532761 systemd-coredump[167944]: ELF object binary architecture: AMD x86-64
Nov 23 15:53:23 np0005532761 systemd[1]: systemd-coredump@5-167934-0.service: Deactivated successfully.
Nov 23 15:53:23 np0005532761 python3.9[168211]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:23 np0005532761 podman[168216]: 2025-11-23 20:53:23.987019309 +0000 UTC m=+0.027816000 container died 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:53:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4d6731e09c44e2c308e4d3af5adfbb7ac3ec3c6508677179371a44db14348867-merged.mount: Deactivated successfully.
Nov 23 15:53:24 np0005532761 podman[168216]: 2025-11-23 20:53:24.041198139 +0000 UTC m=+0.081994810 container remove 73942869ed71004e8cda797d8d918561f55cdcb404521a611ac1e6dda6b175a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 15:53:24 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:53:24 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:53:24 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.125s CPU time.
Nov 23 15:53:24 np0005532761 python3.9[168410]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Nov 23 15:53:25 np0005532761 python3.9[168564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:25.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:25 np0005532761 python3.9[168716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:26 np0005532761 python3.9[168868]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:27.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:53:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:27.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:27] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:53:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:27] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:53:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:27.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:29 np0005532761 python3.9[169023]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:29.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:29 np0005532761 python3.9[169176]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
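The find task above enumerates tracked certificate requests non-recursively, hidden entries included. Roughly the same listing as a sketch:

    from pathlib import Path

    requests_dir = Path("/var/lib/certmonger/requests")
    # file_type=any, hidden=True, recurse=False: every direct child counts.
    entries = sorted(requests_dir.iterdir()) if requests_dir.is_dir() else []
    print(len(entries), "tracked certmonger requests")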
Nov 23 15:53:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 15:53:31 np0005532761 python3.9[169329]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:53:31 np0005532761 systemd[1]: Reloading.
Nov 23 15:53:31 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:53:31 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
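The systemd_service task with daemon_reload=True triggers the "Reloading." line and the two generator warnings above. The reload itself reduces to a single call (sketch):

    import subprocess

    # Re-runs all generators and reloads unit definitions; the rc.local and
    # SysV 'network' warnings above are emitted by generators during this step.
    subprocess.run(["systemctl", "daemon-reload"], check=True)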
Nov 23 15:53:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:31.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205331 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:32 np0005532761 python3.9[169517]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:32 np0005532761 python3.9[169671]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 1 op/s
Nov 23 15:53:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:53:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:53:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:53:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:33 np0005532761 python3.9[169825]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:34 np0005532761 python3.9[169978]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:34 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 6.
Nov 23 15:53:34 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:53:34 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.125s CPU time.
Nov 23 15:53:34 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
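systemd has now restarted the nfs.cephfs.2.0 service six times. The counter can be read back from the unit itself; a sketch using the unit name from the log:

    import subprocess

    UNIT = "ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service"
    # NRestarts is systemd's own counter ("restart counter is at 6" above).
    subprocess.run(["systemctl", "show", "-p", "NRestarts", UNIT], check=True)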
Nov 23 15:53:34 np0005532761 podman[170120]: 2025-11-23 20:53:34.425951282 +0000 UTC m=+0.041856313 container create 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:53:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39f8371b014f6e2159fd72ec274ec402143fc7178fdbdfb90e260ce1d38820c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39f8371b014f6e2159fd72ec274ec402143fc7178fdbdfb90e260ce1d38820c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39f8371b014f6e2159fd72ec274ec402143fc7178fdbdfb90e260ce1d38820c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39f8371b014f6e2159fd72ec274ec402143fc7178fdbdfb90e260ce1d38820c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:34 np0005532761 podman[170120]: 2025-11-23 20:53:34.402912166 +0000 UTC m=+0.018817207 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:34 np0005532761 podman[170120]: 2025-11-23 20:53:34.512474569 +0000 UTC m=+0.128379610 container init 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:53:34 np0005532761 podman[170120]: 2025-11-23 20:53:34.518606397 +0000 UTC m=+0.134511428 container start 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:53:34 np0005532761 bash[170120]: 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:53:34 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:53:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:34 np0005532761 python3.9[170199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 1 op/s
Nov 23 15:53:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:35 np0005532761 python3.9[170390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:35 np0005532761 python3.9[170543]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
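Taken together, the reset-failed commands since 15:53:32 clear the failed state of the whole retired tripleo_nova_libvirt family so the units can be removed without residue. The series as one loop (unit names exactly as logged):

    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in UNITS:
        # check=False: reset-failed on a unit that never failed is harmless here.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)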
Nov 23 15:53:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:53:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:37.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:53:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:37.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:53:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:37 np0005532761 podman[170571]: 2025-11-23 20:53:37.616717998 +0000 UTC m=+0.132309401 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
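The health_status=healthy events for ovn_controller (and for ovn_metadata_agent further down) come from podman's healthcheck timers running the mounted /openstack/healthcheck script. The same check can be run on demand (sketch):

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 == healthy.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=False)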
Nov 23 15:53:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 15:53:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 15:53:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:37.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:53:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:40 np0005532761 python3.9[170752]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 23 15:53:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:40 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:40 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 15:53:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:41 np0005532761 python3.9[170907]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:53:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:41.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:53:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:43 np0005532761 python3.9[171071]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
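The getent/group/user tasks above pin the libvirt account to uid/gid 42473 with a nologin shell. A quick local verification, using only values from the log:

    import grp
    import pwd

    assert grp.getgrnam("libvirt").gr_gid == 42473
    assert pwd.getpwnam("libvirt").pw_uid == 42473
    assert pwd.getpwnam("libvirt").pw_shell == "/sbin/nologin"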
Nov 23 15:53:43 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:53:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:44 np0005532761 python3.9[171233]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:53:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:45 np0005532761 python3.9[171318]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
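The dnf task installs the virtualization stack (note the stray trailing spaces inside the first four quoted names, reproduced from the task's input). With state=present this is a no-op for packages already on the host; the CLI equivalent, sketched:

    import subprocess

    PACKAGES = [
        "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
        "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram",
    ]
    subprocess.run(["dnf", "-y", "install", *PACKAGES], check=True)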
Nov 23 15:53:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
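The DBUS :CRIT messages mean ganesha cannot reach a system bus inside its container, so its dbus admin thread exits; NFS service itself continues below. A check for the missing socket, path taken from the error above (in the failing container this presumably prints False, since no /run/dbus bind mount is visible in the podman create line):

    from pathlib import Path

    print(Path("/run/dbus/system_bus_socket").exists())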
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:47.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:53:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:47.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 15:53:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 15:53:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:53:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:53:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205348 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:48 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:53:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:49.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
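The audit lines show mgr.compute-0.oyehye (cephadm) polling the blocklist, minimal confs, and bootstrap keys. The first of those queries, reproduced from any client holding admin credentials (sketch):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])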
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.356938067 +0000 UTC m=+0.044457240 container create 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:53:50 np0005532761 systemd[1]: Started libpod-conmon-62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809.scope.
Nov 23 15:53:50 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.333547983 +0000 UTC m=+0.021067166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.437614123 +0000 UTC m=+0.125133306 container init 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.44485602 +0000 UTC m=+0.132375183 container start 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:53:50 np0005532761 competent_meitner[171538]: 167 167
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.449982482 +0000 UTC m=+0.137501705 container attach 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:53:50 np0005532761 systemd[1]: libpod-62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809.scope: Deactivated successfully.
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.450919806 +0000 UTC m=+0.138438969 container died 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:53:50 np0005532761 podman[171534]: 2025-11-23 20:53:50.469450465 +0000 UTC m=+0.071402006 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 23 15:53:50 np0005532761 systemd[1]: var-lib-containers-storage-overlay-023306e0bc2f0e0bf0738851157393a2ae48a3157a7c60d858ee63c58d25fc3f-merged.mount: Deactivated successfully.
Nov 23 15:53:50 np0005532761 podman[171520]: 2025-11-23 20:53:50.507851258 +0000 UTC m=+0.195370421 container remove 62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_meitner, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:53:50 np0005532761 systemd[1]: libpod-conmon-62c6469c5c3e04e4a6b4a909b8dea2ebcabe4c58e98dbf0c610590c62f957809.scope: Deactivated successfully.
Nov 23 15:53:50 np0005532761 podman[171580]: 2025-11-23 20:53:50.657283469 +0000 UTC m=+0.039863610 container create 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:53:50 np0005532761 systemd[1]: Started libpod-conmon-177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a.scope.
Nov 23 15:53:50 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:50 np0005532761 podman[171580]: 2025-11-23 20:53:50.639668745 +0000 UTC m=+0.022248906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:50 np0005532761 podman[171580]: 2025-11-23 20:53:50.751445104 +0000 UTC m=+0.134025325 container init 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:53:50 np0005532761 podman[171580]: 2025-11-23 20:53:50.761932604 +0000 UTC m=+0.144512735 container start 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:53:50 np0005532761 podman[171580]: 2025-11-23 20:53:50.766733179 +0000 UTC m=+0.149313310 container attach 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 15:53:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:53:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:50 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:51 np0005532761 reverent_albattani[171597]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:53:51 np0005532761 reverent_albattani[171597]: --> All data devices are unavailable
Nov 23 15:53:51 np0005532761 systemd[1]: libpod-177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a.scope: Deactivated successfully.
Nov 23 15:53:51 np0005532761 podman[171580]: 2025-11-23 20:53:51.08795924 +0000 UTC m=+0.470539371 container died 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:53:51 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8c8844739fd3874bd3f34dc6e775e62d951d94b3c9c692beb6624fcf80df554b-merged.mount: Deactivated successfully.
Nov 23 15:53:51 np0005532761 podman[171580]: 2025-11-23 20:53:51.171582322 +0000 UTC m=+0.554162453 container remove 177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 15:53:51 np0005532761 systemd[1]: libpod-conmon-177fc2d8c6cbb049d65ca93562d2346830e1b2517dbaa574d0a76c844dacf88a.scope: Deactivated successfully.
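The reverent_albattani scope above is one of cephadm's short-lived helper containers: it runs a ceph-volume device report ("passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means no unconsumed device is eligible for a new OSD), exits, and is gone within a second. A sketch of an equivalent one-shot scan; the image digest is copied from the log, while the exact ceph-volume subcommand and the mounts are assumptions for illustration:

    # Hedged sketch: one-shot device scan in a throwaway container, mirroring
    # the create/start/attach/died/remove sequence logged above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    report = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         IMAGE, "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True)
    print(report.stdout)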
Nov 23 15:53:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:51.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
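The radosgw "beast" lines follow a fixed access-log layout: client IP, user, timestamp, request line, HTTP status, byte count, and latency (here the HAProxy "HEAD /" health probes from .100 and .102). A minimal parser for exactly this observed layout:

    # Hedged sketch: extract fields from a radosgw beast access line as seen
    # above. The regex matches only this observed format.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous '
            '[23/Nov/2025:20:53:51.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))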
Nov 23 15:53:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.804269164 +0000 UTC m=+0.052670373 container create 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:53:51 np0005532761 systemd[1]: Started libpod-conmon-9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f.scope.
Nov 23 15:53:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:53:51.848 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:53:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:53:51.849 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:53:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:53:51.850 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:53:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:51 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.782377758 +0000 UTC m=+0.030779007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.903348814 +0000 UTC m=+0.151750103 container init 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.915918269 +0000 UTC m=+0.164319518 container start 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:53:51 np0005532761 jovial_ramanujan[171735]: 167 167
Nov 23 15:53:51 np0005532761 systemd[1]: libpod-9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f.scope: Deactivated successfully.
Nov 23 15:53:51 np0005532761 conmon[171735]: conmon 9281dd09eb1c35b9c88e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f.scope/container/memory.events
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.929530691 +0000 UTC m=+0.177931950 container attach 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:53:51 np0005532761 podman[171719]: 2025-11-23 20:53:51.930391433 +0000 UTC m=+0.178792692 container died 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:53:51 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a6a7b7d62ab27bba23ec276c9ffd1f235c2e30459512cf66f86787eb5c46e618-merged.mount: Deactivated successfully.
Nov 23 15:53:52 np0005532761 podman[171719]: 2025-11-23 20:53:52.018289404 +0000 UTC m=+0.266690643 container remove 9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:53:52 np0005532761 systemd[1]: libpod-conmon-9281dd09eb1c35b9c88e46b9dd690696085b0ac2f70852313b5822bc6060d05f.scope: Deactivated successfully.
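The bare "167 167" printed by the jovial_ramanujan helper looks like a uid/gid probe: 167 is the ceph user and group id in Red Hat-family Ceph images, which cephadm needs when chowning daemon directories on the host. A hedged host-side equivalent; the probe command is an assumption, only the image digest comes from the log:

    # Hedged sketch: ask the container image what uid/gid its "ceph" user has.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "python3", "-c",
         "import pwd; u = pwd.getpwnam('ceph'); print(u.pw_uid, u.pw_gid)"],
        capture_output=True, text=True)
    print(out.stdout.strip())  # expected: "167 167", matching the log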
Nov 23 15:53:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.164441612 +0000 UTC m=+0.023049117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.282240247 +0000 UTC m=+0.140847702 container create 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:53:52 np0005532761 systemd[1]: Started libpod-conmon-3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4.scope.
Nov 23 15:53:52 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c0f319647faa692f41628ec81d38b4c1807358c405d5443347b91dd72dd80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c0f319647faa692f41628ec81d38b4c1807358c405d5443347b91dd72dd80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c0f319647faa692f41628ec81d38b4c1807358c405d5443347b91dd72dd80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c0f319647faa692f41628ec81d38b4c1807358c405d5443347b91dd72dd80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.376316118 +0000 UTC m=+0.234923593 container init 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.38413783 +0000 UTC m=+0.242745285 container start 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.390393192 +0000 UTC m=+0.249000667 container attach 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]: {
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:    "1": [
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:        {
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "devices": [
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "/dev/loop3"
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            ],
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "lv_name": "ceph_lv0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "lv_size": "21470642176",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "name": "ceph_lv0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "tags": {
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.cluster_name": "ceph",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.crush_device_class": "",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.encrypted": "0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.osd_id": "1",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.type": "block",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.vdo": "0",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:                "ceph.with_tpm": "0"
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            },
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "type": "block",
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:            "vg_name": "ceph_vg0"
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:        }
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]:    ]
Nov 23 15:53:52 np0005532761 cranky_montalcini[171776]: }
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.662108854 +0000 UTC m=+0.520716309 container died 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:53:52 np0005532761 systemd[1]: libpod-3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4.scope: Deactivated successfully.
Nov 23 15:53:52 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5c0c0f319647faa692f41628ec81d38b4c1807358c405d5443347b91dd72dd80-merged.mount: Deactivated successfully.
Nov 23 15:53:52 np0005532761 podman[171760]: 2025-11-23 20:53:52.725043891 +0000 UTC m=+0.583651346 container remove 3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:53:52 np0005532761 systemd[1]: libpod-conmon-3065c3f90a5048d392b8d9a2a25de502a8c594f5588962f1865cfea3833f11c4.scope: Deactivated successfully.
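The cranky_montalcini helper printed an OSD listing in the shape of `ceph-volume lvm list --format json` output: OSD 1 is backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster fsid and OSD fsid carried in the LV tags. A sketch pulling the useful fields out of a trimmed copy of that report:

    # Hedged sketch: parse the OSD-to-LV mapping from (a trimmed copy of)
    # the JSON block logged above.
    import json

    report = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {"ceph.osd_id": "1",
                   "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
                   "ceph.type": "block"}
        }
      ]
    }
    """)

    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")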
Nov 23 15:53:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:53:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:52 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.295583017 +0000 UTC m=+0.037955712 container create a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:53:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:53 np0005532761 systemd[1]: Started libpod-conmon-a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735.scope.
Nov 23 15:53:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.280792404 +0000 UTC m=+0.023165109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.376657222 +0000 UTC m=+0.119029937 container init a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.382408801 +0000 UTC m=+0.124781496 container start a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.387475921 +0000 UTC m=+0.129848646 container attach a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:53:53 np0005532761 vibrant_lamarr[171907]: 167 167
Nov 23 15:53:53 np0005532761 systemd[1]: libpod-a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735.scope: Deactivated successfully.
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.390942311 +0000 UTC m=+0.133315006 container died a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 15:53:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2345aa066de79785c015737171797233944e4bf3592d014a9c44b1c170ef9dda-merged.mount: Deactivated successfully.
Nov 23 15:53:53 np0005532761 podman[171891]: 2025-11-23 20:53:53.453379454 +0000 UTC m=+0.195752169 container remove a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:53:53 np0005532761 systemd[1]: libpod-conmon-a7a63a27d9f1e497a6d526c5e41bccc93717a8f75be0b40e9abf2b868c996735.scope: Deactivated successfully.
Nov 23 15:53:53 np0005532761 podman[171932]: 2025-11-23 20:53:53.639693019 +0000 UTC m=+0.053063391 container create b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 15:53:53 np0005532761 podman[171932]: 2025-11-23 20:53:53.617157507 +0000 UTC m=+0.030527909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:53:53 np0005532761 systemd[1]: Started libpod-conmon-b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529.scope.
Nov 23 15:53:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:53:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721fab429482126b43c76f6440b49bf78ade9c47422f7e73886e895099495a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721fab429482126b43c76f6440b49bf78ade9c47422f7e73886e895099495a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721fab429482126b43c76f6440b49bf78ade9c47422f7e73886e895099495a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721fab429482126b43c76f6440b49bf78ade9c47422f7e73886e895099495a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:53:53 np0005532761 podman[171932]: 2025-11-23 20:53:53.858520665 +0000 UTC m=+0.271891117 container init b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:53:53 np0005532761 podman[171932]: 2025-11-23 20:53:53.866001768 +0000 UTC m=+0.279372120 container start b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:53:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:53 np0005532761 podman[171932]: 2025-11-23 20:53:53.875672378 +0000 UTC m=+0.289042751 container attach b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:53:54 np0005532761 lvm[172026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:53:54 np0005532761 lvm[172026]: VG ceph_vg0 finished
Nov 23 15:53:54 np0005532761 charming_mayer[171948]: {}
Nov 23 15:53:54 np0005532761 systemd[1]: libpod-b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529.scope: Deactivated successfully.
Nov 23 15:53:54 np0005532761 systemd[1]: libpod-b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529.scope: Consumed 1.158s CPU time.
Nov 23 15:53:54 np0005532761 podman[171932]: 2025-11-23 20:53:54.603985171 +0000 UTC m=+1.017355523 container died b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:53:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1721fab429482126b43c76f6440b49bf78ade9c47422f7e73886e895099495a3-merged.mount: Deactivated successfully.
Nov 23 15:53:54 np0005532761 podman[171932]: 2025-11-23 20:53:54.667898484 +0000 UTC m=+1.081268836 container remove b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mayer, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:53:54 np0005532761 systemd[1]: libpod-conmon-b7e27135fd3ead4126104aac23bbacef0325bbad69af6bd0d426643ba3c10529.scope: Deactivated successfully.
Nov 23 15:53:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:53:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:53:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
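The two mon_command calls show mgr/cephadm persisting the freshly gathered device inventory into the monitors' config-key store. The stored blob can be read back with the standard CLI; the key name is copied from the log, and the value being a JSON document is cephadm's convention:

    # Hedged sketch: read back the inventory blob the mgr just stored.
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:200])  # cephadm stores a JSON document under this key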
Nov 23 15:53:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:53:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:54 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:55.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
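haproxy's "Layer4 connection problem ... Connection refused" means its health check could not even complete a TCP connect to that ganesha backend, which is expected while the NFS daemon is being redeployed; the two remaining backends keep the service up. A Layer4 check reduces to a timed connect; the host and port below are placeholders, not values from the log:

    # Hedged sketch: a Layer4 (TCP connect) health check like haproxy's.
    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # refused, unreachable, or timed out -> DOWN
            return False

    print(l4_check("127.0.0.1", 2049))  # placeholder NFS backend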
Nov 23 15:53:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:53:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:56 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:53:57.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
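Alertmanager's "context deadline exceeded" means the dashboard webhook POSTs to compute-1 and compute-2 timed out on both attempts before the retry was canceled. Functionally this is an HTTP POST with a hard deadline; the URL is copied from the message, while the empty payload and timeout value are assumptions:

    # Hedged sketch: a deadline-bounded webhook POST like the failing notify.
    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)  # deadline; exceeding it raises
    except OSError as exc:
        print("notify failed:", exc)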
Nov 23 15:53:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:57] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 23 15:53:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:53:57] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 23 15:53:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:57.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:53:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:58 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:53:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:53:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:53:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:53:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:54:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:00 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:01.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:54:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:02 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:54:03
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', '.nfs', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
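"prepared 0/10 upmap changes" means the upmap optimizer walked the pools listed above and found nothing worth moving this round, 10 being its per-round optimization budget; with all 337 PGs active+clean that is the expected steady state. The balancer state can be inspected from the CLI:

    # Hedged sketch: query the balancer module's current state.
    import subprocess

    status = subprocess.run(["ceph", "balancer", "status"],
                            capture_output=True, text=True, check=True).stdout
    print(status)  # shows mode (upmap), active flag, last optimize result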
Nov 23 15:54:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:54:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:54:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:54:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:03.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:04 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:05 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 15:54:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:05.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:06 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:07.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:07.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:07.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:54:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:07] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:54:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:07] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:54:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:08 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:08 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:08 np0005532761 podman[172285]: 2025-11-23 20:54:08.560999637 +0000 UTC m=+0.082536213 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 23 15:54:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:08 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:09.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 15:54:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:10 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205411 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:54:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 15:54:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:12 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:54:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:14 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:15 np0005532761 kernel: SELinux:  Converting 2775 SID table entries...
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:54:15 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:54:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 15:54:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:16 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:17.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:17.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:54:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:54:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:54:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205417 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:54:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:54:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 15:54:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:18 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:19.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:19 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 23 15:54:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:54:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:20 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:21 np0005532761 podman[172365]: 2025-11-23 20:54:21.560415739 +0000 UTC m=+0.069251508 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 15:54:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Nov 23 15:54:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:22 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:23.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98002270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:24 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:24 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:24 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:54:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:24 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Nov 23 15:54:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:26 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:27.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:27.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:54:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugins.update.checker t=2025-11-23T20:54:27.252062188Z level=info msg="Update check succeeded" duration=49.476457ms
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana.update.checker t=2025-11-23T20:54:27.257734946Z level=info msg="Update check succeeded" duration=50.192266ms
Nov 23 15:54:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:27] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:54:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:27] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:54:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=cleanup t=2025-11-23T20:54:27.728802149Z level=info msg="Completed cleanup jobs" duration=584.859256ms
Nov 23 15:54:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:28 np0005532761 kernel: SELinux:  Converting 2775 SID table entries...
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:54:28 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:54:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Nov 23 15:54:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:28 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:54:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:30 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0027b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:31.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:31.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:54:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:32 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:54:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:54:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:54:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0027b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205433 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0027b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:33.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:54:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:35.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205436 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 519ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:54:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:36.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 23 15:54:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:36 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:37.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:54:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:37] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 23 15:54:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:37] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 23 15:54:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 23 15:54:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:38 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:39 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 23 15:54:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:39 np0005532761 podman[172414]: 2025-11-23 20:54:39.612756617 +0000 UTC m=+0.117402090 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 15:54:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:40.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 23 15:54:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:40 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:54:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:42 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:44.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:44 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:46.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:46 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:47.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:54:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:47] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 23 15:54:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:47] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 23 15:54:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:54:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:54:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:54:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:48 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:54:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:50 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:54:51.849 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:54:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:54:51.850 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:54:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:54:51.850 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:54:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:52.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:52 np0005532761 podman[180342]: 2025-11-23 20:54:52.539885502 +0000 UTC m=+0.056020010 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 15:54:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:52 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:54:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:52 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:54:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:54 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf740016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:55.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 15:54:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:56 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:56.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:54:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:56 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:54:57.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:57.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf740016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 15:54:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 15:54:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:54:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 15:54:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205458 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:58.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:54:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:58 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:54:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:54:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000051s ======
Nov 23 15:54:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 23 15:54:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:54:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.674660475 +0000 UTC m=+0.036973800 container create 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:54:59 np0005532761 systemd[1]: Started libpod-conmon-91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6.scope.
Nov 23 15:54:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.752749463 +0000 UTC m=+0.115062848 container init 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.657655989 +0000 UTC m=+0.019969344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.760140637 +0000 UTC m=+0.122453962 container start 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.7637026 +0000 UTC m=+0.126015935 container attach 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:54:59 np0005532761 determined_kapitsa[185408]: 167 167
Nov 23 15:54:59 np0005532761 systemd[1]: libpod-91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6.scope: Deactivated successfully.
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.769878242 +0000 UTC m=+0.132191567 container died 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Nov 23 15:54:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bda436228bf78eacad92a3b1348f567152a01d5c45117f4907a0dbfde37b4b6f-merged.mount: Deactivated successfully.
Nov 23 15:54:59 np0005532761 podman[185350]: 2025-11-23 20:54:59.814196214 +0000 UTC m=+0.176509549 container remove 91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 15:54:59 np0005532761 systemd[1]: libpod-conmon-91693b8725519c36a3c12f33e53da6db9c75aa353ba1a6216533d154649e4de6.scope: Deactivated successfully.
Nov 23 15:54:59 np0005532761 podman[185596]: 2025-11-23 20:54:59.988210197 +0000 UTC m=+0.043751568 container create c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 15:55:00 np0005532761 systemd[1]: Started libpod-conmon-c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e.scope.
Nov 23 15:55:00 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:54:59.967307429 +0000 UTC m=+0.022848830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:55:00.076789391 +0000 UTC m=+0.132330772 container init c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:55:00.08706503 +0000 UTC m=+0.142606391 container start c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:55:00.091611429 +0000 UTC m=+0.147152790 container attach c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:55:00 np0005532761 upbeat_heyrovsky[185669]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:55:00 np0005532761 upbeat_heyrovsky[185669]: --> All data devices are unavailable
Nov 23 15:55:00 np0005532761 systemd[1]: libpod-c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e.scope: Deactivated successfully.
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:55:00.443379185 +0000 UTC m=+0.498920566 container died c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:55:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-676a0d5594eb1276375726b75e04f073de9dd051fcadf98e9a7b58decc33f52c-merged.mount: Deactivated successfully.
Nov 23 15:55:00 np0005532761 podman[185596]: 2025-11-23 20:55:00.490457949 +0000 UTC m=+0.545999310 container remove c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:55:00 np0005532761 systemd[1]: libpod-conmon-c890d973d6b35ccd965256da3aca3d25d2df7211fa3d26d2252c1a930a43077e.scope: Deactivated successfully.
Nov 23 15:55:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:00.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:55:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:00 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.030353208 +0000 UTC m=+0.042594589 container create d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:55:01 np0005532761 systemd[1]: Started libpod-conmon-d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5.scope.
Nov 23 15:55:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.010435365 +0000 UTC m=+0.022676766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.120388218 +0000 UTC m=+0.132629639 container init d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.128220694 +0000 UTC m=+0.140462075 container start d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:55:01 np0005532761 loving_hopper[186481]: 167 167
Nov 23 15:55:01 np0005532761 systemd[1]: libpod-d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5.scope: Deactivated successfully.
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.135965687 +0000 UTC m=+0.148207118 container attach d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.136654845 +0000 UTC m=+0.148896236 container died d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:55:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0f85873345f719bfa3669a4896439fea5a46eed51529fb71830d7e454e5a6f38-merged.mount: Deactivated successfully.
Nov 23 15:55:01 np0005532761 podman[186407]: 2025-11-23 20:55:01.17877321 +0000 UTC m=+0.191014591 container remove d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:55:01 np0005532761 systemd[1]: libpod-conmon-d77963240693036e86da13831cca7f88d6f3ba40560e9bc3e0369cc4176028c5.scope: Deactivated successfully.
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.355141125 +0000 UTC m=+0.054650794 container create 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:55:01 np0005532761 systemd[1]: Started libpod-conmon-0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29.scope.
Nov 23 15:55:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:01.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.33362256 +0000 UTC m=+0.033132239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:55:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505180436ad38ca56cb9fb90a9484ec7c3a47bafe190b45e8a0d3039031c3918/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505180436ad38ca56cb9fb90a9484ec7c3a47bafe190b45e8a0d3039031c3918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505180436ad38ca56cb9fb90a9484ec7c3a47bafe190b45e8a0d3039031c3918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505180436ad38ca56cb9fb90a9484ec7c3a47bafe190b45e8a0d3039031c3918/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.450643369 +0000 UTC m=+0.150153028 container init 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.468871307 +0000 UTC m=+0.168380946 container start 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.472667727 +0000 UTC m=+0.172177386 container attach 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:55:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:01 np0005532761 modest_kilby[186714]: {
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:    "1": [
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:        {
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "devices": [
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "/dev/loop3"
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            ],
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "lv_name": "ceph_lv0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "lv_size": "21470642176",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "name": "ceph_lv0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "tags": {
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.cluster_name": "ceph",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.crush_device_class": "",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.encrypted": "0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.osd_id": "1",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.type": "block",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.vdo": "0",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:                "ceph.with_tpm": "0"
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            },
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "type": "block",
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:            "vg_name": "ceph_vg0"
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:        }
Nov 23 15:55:01 np0005532761 modest_kilby[186714]:    ]
Nov 23 15:55:01 np0005532761 modest_kilby[186714]: }
Nov 23 15:55:01 np0005532761 systemd[1]: libpod-0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29.scope: Deactivated successfully.
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.796242932 +0000 UTC m=+0.495752571 container died 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:55:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-505180436ad38ca56cb9fb90a9484ec7c3a47bafe190b45e8a0d3039031c3918-merged.mount: Deactivated successfully.
Nov 23 15:55:01 np0005532761 podman[186637]: 2025-11-23 20:55:01.843735818 +0000 UTC m=+0.543245477 container remove 0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:55:01 np0005532761 systemd[1]: libpod-conmon-0a4e3aabf0bb88ea74f79b13804019fd47f14dd6a4ccac9d037fbb298c38ca29.scope: Deactivated successfully.
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.456733702 +0000 UTC m=+0.045937834 container create 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:55:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:02 np0005532761 systemd[1]: Started libpod-conmon-236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356.scope.
Nov 23 15:55:02 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:55:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:02.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.435939918 +0000 UTC m=+0.025144070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.536546136 +0000 UTC m=+0.125750318 container init 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.543710564 +0000 UTC m=+0.132914696 container start 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.548734175 +0000 UTC m=+0.137938317 container attach 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:55:02 np0005532761 priceless_euler[187455]: 167 167
Nov 23 15:55:02 np0005532761 systemd[1]: libpod-236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356.scope: Deactivated successfully.
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.552851243 +0000 UTC m=+0.142055425 container died 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:55:02 np0005532761 systemd[1]: var-lib-containers-storage-overlay-402d43f1364897113412c6d84110e5ccfc4fbda41661ce0b43f0661655487619-merged.mount: Deactivated successfully.
Nov 23 15:55:02 np0005532761 podman[187387]: 2025-11-23 20:55:02.597669289 +0000 UTC m=+0.186873421 container remove 236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 15:55:02 np0005532761 systemd[1]: libpod-conmon-236c5face89da1f2034130d64ff32652b4bf071e4953e1f2cfe261d5eedda356.scope: Deactivated successfully.
Nov 23 15:55:02 np0005532761 podman[187596]: 2025-11-23 20:55:02.779538768 +0000 UTC m=+0.046127900 container create 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:55:02 np0005532761 systemd[1]: Started libpod-conmon-96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f.scope.
Nov 23 15:55:02 np0005532761 podman[187596]: 2025-11-23 20:55:02.757661844 +0000 UTC m=+0.024250996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:55:02 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:55:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9ebb622c84d7f8f0ac056ea0bd85667f4725c40cade86d5f5daecde4111000/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9ebb622c84d7f8f0ac056ea0bd85667f4725c40cade86d5f5daecde4111000/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9ebb622c84d7f8f0ac056ea0bd85667f4725c40cade86d5f5daecde4111000/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:02 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9ebb622c84d7f8f0ac056ea0bd85667f4725c40cade86d5f5daecde4111000/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:02 np0005532761 podman[187596]: 2025-11-23 20:55:02.882443267 +0000 UTC m=+0.149032409 container init 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:55:02 np0005532761 podman[187596]: 2025-11-23 20:55:02.894582125 +0000 UTC m=+0.161171257 container start 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:55:02 np0005532761 podman[187596]: 2025-11-23 20:55:02.898879058 +0000 UTC m=+0.165468200 container attach 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:55:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:02 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:55:03
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.nfs', 'default.rgw.log', 'images']
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:55:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:55:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:03.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:03 np0005532761 lvm[188160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:55:03 np0005532761 lvm[188160]: VG ceph_vg0 finished
Nov 23 15:55:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:03 np0005532761 zen_noyce[187673]: {}
Nov 23 15:55:03 np0005532761 systemd[1]: libpod-96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f.scope: Deactivated successfully.
Nov 23 15:55:03 np0005532761 systemd[1]: libpod-96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f.scope: Consumed 1.052s CPU time.
Nov 23 15:55:03 np0005532761 podman[187596]: 2025-11-23 20:55:03.57160336 +0000 UTC m=+0.838192492 container died 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 15:55:03 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5a9ebb622c84d7f8f0ac056ea0bd85667f4725c40cade86d5f5daecde4111000-merged.mount: Deactivated successfully.
Nov 23 15:55:03 np0005532761 podman[187596]: 2025-11-23 20:55:03.621909119 +0000 UTC m=+0.888498241 container remove 96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 15:55:03 np0005532761 systemd[1]: libpod-conmon-96f06e1171dcbc6d8641e79e3e2f3b3cc65d9072f434adfc9b584e8a8e9ffe8f.scope: Deactivated successfully.
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:55:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:55:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:04 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:06 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:07.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:07] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:55:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:07] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:55:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:08 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:09.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf780036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:10 np0005532761 podman[190010]: 2025-11-23 20:55:10.553267308 +0000 UTC m=+0.076625961 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 23 15:55:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:10 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:12 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:13.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:55:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:14 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:15.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205516 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:55:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:16 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:55:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:55:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:55:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:55:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:18 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:19.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:20 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:21.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:22.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:22 np0005532761 kernel: SELinux:  Converting 2776 SID table entries...
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:55:22 np0005532761 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:55:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:55:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:22 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:23 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 23 15:55:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:23.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:23 np0005532761 podman[190101]: 2025-11-23 20:55:23.458673853 +0000 UTC m=+0.048382355 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:55:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:23 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:55:23 np0005532761 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Nov 23 15:55:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:55:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:24 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:25.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:26 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:55:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:55:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:26 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c0023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:27] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 15:55:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:27] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 15:55:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:28.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:55:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:28 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:55:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:55:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:29.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 15:55:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:30 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:32 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:55:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 15:55:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:32 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:55:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:55:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:55:33 np0005532761 systemd[1]: Stopping OpenSSH server daemon...
Nov 23 15:55:33 np0005532761 systemd[1]: sshd.service: Deactivated successfully.
Nov 23 15:55:33 np0005532761 systemd[1]: sshd.service: Unit process 190486 (sshd-session) remains running after unit stopped.
Nov 23 15:55:33 np0005532761 systemd[1]: sshd.service: Unit process 190492 (sshd-session) remains running after unit stopped.
Nov 23 15:55:33 np0005532761 systemd[1]: Stopped OpenSSH server daemon.
Nov 23 15:55:33 np0005532761 systemd[1]: sshd.service: Consumed 9.891s CPU time, 37.7M memory peak, read 32.0K from disk, written 552.0K to disk.
Nov 23 15:55:33 np0005532761 systemd[1]: Stopped target sshd-keygen.target.
Nov 23 15:55:33 np0005532761 systemd[1]: Stopping sshd-keygen.target...
Nov 23 15:55:33 np0005532761 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:33 np0005532761 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:33 np0005532761 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:33 np0005532761 systemd[1]: Reached target sshd-keygen.target.
Nov 23 15:55:33 np0005532761 systemd[1]: Starting OpenSSH server daemon...
Nov 23 15:55:33 np0005532761 systemd[1]: Started OpenSSH server daemon.
Nov 23 15:55:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:33.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:55:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:35 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:55:35 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:55:35 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:35.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:35 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:35 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:35 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:55:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:37.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:37] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:55:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:37] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:55:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205538 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:55:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:38.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:55:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:39.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:40.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:55:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:41.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:41 np0005532761 podman[197536]: 2025-11-23 20:55:41.565359504 +0000 UTC m=+0.084077361 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:55:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Nov 23 15:55:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:43 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:55:43 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:55:43 np0005532761 systemd[1]: man-db-cache-update.service: Consumed 10.558s CPU time.
Nov 23 15:55:43 np0005532761 systemd[1]: run-rbb189f0c62a645369869469e63b5dc60.service: Deactivated successfully.
Nov 23 15:55:44 np0005532761 python3.9[199921]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:44 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:44 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:44 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 23 15:55:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:45 np0005532761 python3.9[200114]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:45 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:45 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:45 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:46 np0005532761 python3.9[200304]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:46 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:46 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:46 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:47.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:47 np0005532761 python3.9[200496]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:47 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:47 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:47 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:47] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:55:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:47] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 23 15:55:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:55:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:55:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:49 np0005532761 python3.9[200687]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:49.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:49 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:49 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:50 np0005532761 python3.9[200876]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:50 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:50 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:50 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:55:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:51.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:51 np0005532761 python3.9[201067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:55:51.851 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:55:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:55:51.851 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:55:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:55:51.851 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:55:51 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:51 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:51 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:52 np0005532761 python3.9[201258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:53.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:53 np0005532761 python3.9[201414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:53 np0005532761 podman[201416]: 2025-11-23 20:55:53.770961274 +0000 UTC m=+0.048049357 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 15:55:53 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:53 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:53 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:55:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:55.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:56 np0005532761 python3.9[201626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:56 np0005532761 systemd[1]: Reloading.
Nov 23 15:55:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:57 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:55:57.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:55:57 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:57 np0005532761 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 23 15:55:57 np0005532761 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 23 15:55:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:57] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:55:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:55:57] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:55:58 np0005532761 python3.9[201819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:58.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:55:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:59 np0005532761 python3.9[201976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:55:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:59.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:55:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:59 np0005532761 python3.9[202131]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:00 np0005532761 python3.9[202311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:56:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:01.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:01 np0005532761 python3.9[202470]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:02 np0005532761 python3.9[202625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:03 np0005532761 python3.9[202781]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:56:03
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'backups']
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:56:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:56:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
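The pg_autoscaler figures above are internally consistent: each logged pg target equals usage fraction × bias × 300, where 300 plausibly reflects this 3-OSD cluster's osd count × target PGs per OSD (3 × 100) — an inference from the numbers here, since the autoscaler's full formula also weighs replica counts and target ratios. A quick check against three pools from the log:

```python
# Sanity check of the pg_autoscaler lines above. (usage, bias) pairs are
# copied verbatim from the log; 300 is an inferred cluster constant, not
# a Ceph API value.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}
for name, (usage, bias) in pools.items():
    print(name, usage * bias * 300)  # ~ the logged "pg target" values
```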
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:56:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:56:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:03.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:03 np0005532761 python3.9[202939]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:04 np0005532761 podman[203064]: 2025-11-23 20:56:04.628503062 +0000 UTC m=+0.054151820 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:04 np0005532761 podman[203064]: 2025-11-23 20:56:04.741225108 +0000 UTC m=+0.166873846 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 15:56:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:56:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:05 np0005532761 podman[203262]: 2025-11-23 20:56:05.164146695 +0000 UTC m=+0.047736658 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:05 np0005532761 podman[203262]: 2025-11-23 20:56:05.19496358 +0000 UTC m=+0.078553533 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:05 np0005532761 podman[203421]: 2025-11-23 20:56:05.475558787 +0000 UTC m=+0.046365701 container exec 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:56:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:05.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:05 np0005532761 podman[203421]: 2025-11-23 20:56:05.489164112 +0000 UTC m=+0.059971046 container exec_died 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:56:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:05 np0005532761 python3.9[203385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:05 np0005532761 podman[203487]: 2025-11-23 20:56:05.696779797 +0000 UTC m=+0.056725459 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:56:05 np0005532761 podman[203487]: 2025-11-23 20:56:05.726105432 +0000 UTC m=+0.086051064 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 15:56:05 np0005532761 podman[203636]: 2025-11-23 20:56:05.926632417 +0000 UTC m=+0.053299887 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Nov 23 15:56:05 np0005532761 podman[203636]: 2025-11-23 20:56:05.980184971 +0000 UTC m=+0.106852441 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, architecture=x86_64, release=1793, description=keepalived for Ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Nov 23 15:56:06 np0005532761 podman[203768]: 2025-11-23 20:56:06.187255312 +0000 UTC m=+0.052208079 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:06 np0005532761 podman[203768]: 2025-11-23 20:56:06.222226988 +0000 UTC m=+0.087179755 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:06 np0005532761 python3.9[203749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:06 np0005532761 podman[203846]: 2025-11-23 20:56:06.457132152 +0000 UTC m=+0.055311590 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:56:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:06 np0005532761 podman[203846]: 2025-11-23 20:56:06.687180129 +0000 UTC m=+0.285359517 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 15:56:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:07.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:07.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:07.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
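This is the escalation of the warnings two lines up: both dashboard webhooks (compute-1 and compute-2, port 8443) fail with i/o timeouts until the retry window closes, which points at reachability rather than the payload. A minimal probe of one receiver URL taken from the log — the empty alerts list is a placeholder, as the real endpoint expects alertmanager's full webhook payload:

```python
# Sketch: probe the dashboard receiver alertmanager keeps timing out on.
import json
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url,
    data=json.dumps({"alerts": []}).encode(),  # placeholder body
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("reachable, HTTP", resp.status)
except OSError as exc:  # URLError and timeouts are OSError subclasses
    print("unreachable:", exc)  # mirrors the logged "i/o timeout"
```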
Nov 23 15:56:07 np0005532761 podman[204112]: 2025-11-23 20:56:07.146645933 +0000 UTC m=+0.056875083 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:07 np0005532761 podman[204112]: 2025-11-23 20:56:07.217077217 +0000 UTC m=+0.127306367 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:56:07 np0005532761 python3.9[204079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:56:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:56:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:56:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:56:08 np0005532761 python3.9[204357]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:56:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.69112872 +0000 UTC m=+0.076377105 container create 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.645355975 +0000 UTC m=+0.030604380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:08 np0005532761 systemd[1]: Started libpod-conmon-011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce.scope.
Nov 23 15:56:08 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.801284278 +0000 UTC m=+0.186532673 container init 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.807860014 +0000 UTC m=+0.193108379 container start 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.81256128 +0000 UTC m=+0.197809645 container attach 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:56:08 np0005532761 elegant_mirzakhani[204656]: 167 167
Nov 23 15:56:08 np0005532761 systemd[1]: libpod-011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce.scope: Deactivated successfully.
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.813502524 +0000 UTC m=+0.198750889 container died 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:56:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c2f0e54f4ebd77158b75106e9f5ebad4f3102d6434a371ef84f140adee7b6565-merged.mount: Deactivated successfully.
Nov 23 15:56:08 np0005532761 podman[204640]: 2025-11-23 20:56:08.852329134 +0000 UTC m=+0.237577499 container remove 011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:56:08 np0005532761 systemd[1]: libpod-conmon-011cdbeecbcf59bce9befda8a94af6a101d3c547f5cf8839fdb7896861d422ce.scope: Deactivated successfully.
Nov 23 15:56:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:08 np0005532761 python3.9[204623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.001371341 +0000 UTC m=+0.038484180 container create 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:09 np0005532761 systemd[1]: Started libpod-conmon-126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73.scope.
Nov 23 15:56:09 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:09 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:08.985092136 +0000 UTC m=+0.022205005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.085529484 +0000 UTC m=+0.122642323 container init 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.095726227 +0000 UTC m=+0.132839066 container start 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.098662805 +0000 UTC m=+0.135775664 container attach 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 15:56:09 np0005532761 sharp_mendeleev[204724]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:56:09 np0005532761 sharp_mendeleev[204724]: --> All data devices are unavailable
Nov 23 15:56:09 np0005532761 systemd[1]: libpod-126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73.scope: Deactivated successfully.
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.418252477 +0000 UTC m=+0.455365326 container died 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:56:09 np0005532761 systemd[1]: var-lib-containers-storage-overlay-baf4668c4b2560082c2fd9a5ec32a17e9df86ae45a95e1459ce3f25b37bae814-merged.mount: Deactivated successfully.
Nov 23 15:56:09 np0005532761 podman[204684]: 2025-11-23 20:56:09.469633562 +0000 UTC m=+0.506746411 container remove 126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:09 np0005532761 systemd[1]: libpod-conmon-126433b7e8b425885df0079e69eb0b6eb7921354b0b3ccc09be1ae1725c1ac73.scope: Deactivated successfully.
Nov 23 15:56:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:09.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:09 np0005532761 python3.9[204867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.006649401 +0000 UTC m=+0.042911479 container create 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:56:10 np0005532761 systemd[1]: Started libpod-conmon-986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5.scope.
Nov 23 15:56:10 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.073256643 +0000 UTC m=+0.109518741 container init 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.079212183 +0000 UTC m=+0.115474271 container start 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.082493921 +0000 UTC m=+0.118756089 container attach 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:56:10 np0005532761 stoic_borg[205015]: 167 167
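[editorial note] stoic_borg (and loving_gagarin below) print only "167 167" and exit immediately; 167 is the uid/gid of the ceph user inside the ceph container image, and cephadm is believed to run short-lived containers like these to learn which owner it should chown host directories to. A sketch of an equivalent probe, assuming podman is on PATH and that /var/lib/ceph inside the image is owned by the ceph user; the image digest is copied from this log, the exact command cephadm uses is an assumption:

import subprocess

# Throwaway container that stats a path owned by the ceph user.
# The probe command is an assumption; the digest comes from this log.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
out = subprocess.check_output(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"], text=True)
print(out.strip())  # expected: "167 167", as logged above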
Nov 23 15:56:10 np0005532761 systemd[1]: libpod-986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5.scope: Deactivated successfully.
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.084677969 +0000 UTC m=+0.120940047 container died 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:09.989971155 +0000 UTC m=+0.026233263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:10 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1ffb1435154a34074a34a02a8fa56e5c8e97ac93fdfd307d80d6aa1507db942c-merged.mount: Deactivated successfully.
Nov 23 15:56:10 np0005532761 podman[204999]: 2025-11-23 20:56:10.138055007 +0000 UTC m=+0.174317125 container remove 986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:56:10 np0005532761 systemd[1]: libpod-conmon-986238a50c326fe37028ab0ee1c9d322ead2795a10de9288a68324bbe153c3b5.scope: Deactivated successfully.
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.317481268 +0000 UTC m=+0.048831977 container create f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:56:10 np0005532761 systemd[1]: Started libpod-conmon-f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459.scope.
Nov 23 15:56:10 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e109ee918d8a9d0bb63345323da0dd5e8a79fef647173cd40b6b12474d4a3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e109ee918d8a9d0bb63345323da0dd5e8a79fef647173cd40b6b12474d4a3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e109ee918d8a9d0bb63345323da0dd5e8a79fef647173cd40b6b12474d4a3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:10 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e109ee918d8a9d0bb63345323da0dd5e8a79fef647173cd40b6b12474d4a3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.297230077 +0000 UTC m=+0.028580846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.396888603 +0000 UTC m=+0.128239332 container init f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.407832696 +0000 UTC m=+0.139183415 container start f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.411007581 +0000 UTC m=+0.142358310 container attach f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]: {
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:    "1": [
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:        {
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "devices": [
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "/dev/loop3"
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            ],
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "lv_name": "ceph_lv0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "lv_size": "21470642176",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "name": "ceph_lv0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "tags": {
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.cluster_name": "ceph",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.crush_device_class": "",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.encrypted": "0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.osd_id": "1",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.type": "block",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.vdo": "0",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:                "ceph.with_tpm": "0"
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            },
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "type": "block",
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:            "vg_name": "ceph_vg0"
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:        }
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]:    ]
Nov 23 15:56:10 np0005532761 exciting_beaver[205056]: }
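[editorial note] The exciting_beaver output above is ceph-volume's lvm list in JSON for OSD 1: one block LV, ceph_vg0/ceph_lv0 on /dev/loop3, tagged with the cluster fsid and osd fsid. A short sketch parsing that structure; the document below is an abbreviated reassembly of the log lines above, with field names copied from them:

import json

# JSON reassembled (abbreviated) from the container output above;
# the full document maps OSD id -> list of logical volumes.
doc = json.loads("""
{
  "1": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "21470642176",
      "tags": {"ceph.osd_id": "1",
               "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
               "ceph.type": "block"}
    }
  ]
}
""")
for osd_id, lvs in doc.items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
              f"({size_gib:.1f} GiB, type={lv['tags']['ceph.type']})")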
Nov 23 15:56:10 np0005532761 systemd[1]: libpod-f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459.scope: Deactivated successfully.
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.695787561 +0000 UTC m=+0.427138270 container died f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:56:10 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e1e109ee918d8a9d0bb63345323da0dd5e8a79fef647173cd40b6b12474d4a3f-merged.mount: Deactivated successfully.
Nov 23 15:56:10 np0005532761 podman[205039]: 2025-11-23 20:56:10.768732853 +0000 UTC m=+0.500083562 container remove f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_beaver, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 15:56:10 np0005532761 systemd[1]: libpod-conmon-f5e68e30e898488ca39f8c062a16dde39ce8ba1d9da535613a1151ce63d83459.scope: Deactivated successfully.
Nov 23 15:56:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:56:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.318499473 +0000 UTC m=+0.044587053 container create 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:56:11 np0005532761 systemd[1]: Started libpod-conmon-581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e.scope.
Nov 23 15:56:11 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.296545067 +0000 UTC m=+0.022632627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.406348854 +0000 UTC m=+0.132436414 container init 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.416593688 +0000 UTC m=+0.142681228 container start 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.419223769 +0000 UTC m=+0.145311309 container attach 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 15:56:11 np0005532761 loving_gagarin[205190]: 167 167
Nov 23 15:56:11 np0005532761 systemd[1]: libpod-581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e.scope: Deactivated successfully.
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.421173702 +0000 UTC m=+0.147261242 container died 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:56:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-898a95ae06ba874d36cc16fbb159e9d85fe7171fb0eeb1523737db73bccc37f3-merged.mount: Deactivated successfully.
Nov 23 15:56:11 np0005532761 podman[205173]: 2025-11-23 20:56:11.458379847 +0000 UTC m=+0.184467417 container remove 581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 15:56:11 np0005532761 systemd[1]: libpod-conmon-581bfd77da7bf329a8a45f8d54d9e1153ef02a00fa60f302d8c29ace20420c8e.scope: Deactivated successfully.
Nov 23 15:56:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:11 np0005532761 podman[205214]: 2025-11-23 20:56:11.639649828 +0000 UTC m=+0.043586058 container create a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:56:11 np0005532761 systemd[1]: Started libpod-conmon-a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626.scope.
Nov 23 15:56:11 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:56:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e41fe69b52b8b9e659f9280d391aaeddad75a37cb2e33f19f677c8d4515fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e41fe69b52b8b9e659f9280d391aaeddad75a37cb2e33f19f677c8d4515fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e41fe69b52b8b9e659f9280d391aaeddad75a37cb2e33f19f677c8d4515fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:11 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e41fe69b52b8b9e659f9280d391aaeddad75a37cb2e33f19f677c8d4515fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:56:11 np0005532761 podman[205214]: 2025-11-23 20:56:11.712651791 +0000 UTC m=+0.116588071 container init a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:56:11 np0005532761 podman[205214]: 2025-11-23 20:56:11.620198657 +0000 UTC m=+0.024134897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:56:11 np0005532761 podman[205214]: 2025-11-23 20:56:11.722606587 +0000 UTC m=+0.126542817 container start a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:56:11 np0005532761 podman[205214]: 2025-11-23 20:56:11.726216604 +0000 UTC m=+0.130152844 container attach a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:56:11 np0005532761 podman[205228]: 2025-11-23 20:56:11.762797693 +0000 UTC m=+0.084446782 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 15:56:12 np0005532761 lvm[205330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:56:12 np0005532761 lvm[205330]: VG ceph_vg0 finished
Nov 23 15:56:12 np0005532761 sharp_diffie[205232]: {}
Nov 23 15:56:12 np0005532761 systemd[1]: libpod-a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626.scope: Deactivated successfully.
Nov 23 15:56:12 np0005532761 systemd[1]: libpod-a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626.scope: Consumed 1.007s CPU time.
Nov 23 15:56:12 np0005532761 podman[205214]: 2025-11-23 20:56:12.413749371 +0000 UTC m=+0.817685601 container died a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:56:12 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d9e41fe69b52b8b9e659f9280d391aaeddad75a37cb2e33f19f677c8d4515fc6-merged.mount: Deactivated successfully.
Nov 23 15:56:12 np0005532761 podman[205214]: 2025-11-23 20:56:12.452465566 +0000 UTC m=+0.856401796 container remove a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_diffie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:56:12 np0005532761 systemd[1]: libpod-conmon-a1a9eda6e942771097fa4e8703129b33e49bf9ea0804d6061817ec688bd53626.scope: Deactivated successfully.
Nov 23 15:56:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:56:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:56:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:13.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:14 np0005532761 python3.9[205500]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:56:14 np0005532761 python3.9[205654]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:15.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:15 np0005532761 python3.9[205807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:16 np0005532761 python3.9[205959]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:16.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:17 np0005532761 python3.9[206112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
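[editorial note] Alertmanager's dispatcher gave up notifying the ceph-dashboard webhook receivers on compute-1 and compute-2 after two attempts each, with "context deadline exceeded": the POSTs to port 8443 timed out. A hedged reachability probe for one of the failing endpoints; the URL is copied from the log entry, while the empty-alerts payload and the 5-second timeout are assumptions:

import json
import urllib.request

# Probe the receiver endpoint named in the failing notification above.
# Payload shape and timeout are assumptions, not Alertmanager's exact request.
URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
body = json.dumps({"alerts": []}).encode()
req = urllib.request.Request(URL, data=body,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except OSError as exc:
    print("notify failed, as in the log:", exc)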
Nov 23 15:56:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:56:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:17.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:56:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:56:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:56:17 np0005532761 python3.9[206265]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:56:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
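[editorial note] The audit lines show the mgr polling "osd blocklist ls" as JSON, part of cephadm's periodic housekeeping. The same query can be scripted; a minimal sketch assuming the ceph CLI and an admin keyring are available on this host, where the addr/until field names are assumptions about the JSON shape:

import json
import subprocess

# List current OSD blocklist entries the same way the mgr does above.
out = subprocess.check_output(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"], text=True)
entries = json.loads(out)
print(f"{len(entries)} blocklist entries")
for e in entries:
    print(e.get("addr"), "until", e.get("until"))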
Nov 23 15:56:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:19 np0005532761 python3.9[206419]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:20 np0005532761 python3.9[206544]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931378.8016827-1622-98389766897715/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:20 np0005532761 python3.9[206722]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:56:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:21 np0005532761 python3.9[206848]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931380.3492975-1622-90211535817170/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:21.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:21 np0005532761 python3.9[207000]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:22 np0005532761 python3.9[207125]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931381.5406501-1622-233614486243623/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:23 np0005532761 python3.9[207278]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:23.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:23 np0005532761 python3.9[207404]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931382.6340785-1622-233780467307822/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:24 np0005532761 podman[207556]: 2025-11-23 20:56:24.188035398 +0000 UTC m=+0.067503097 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 15:56:24 np0005532761 python3.9[207557]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:24.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:56:24 np0005532761 python3.9[207703]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931383.8313637-1622-9449026834754/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:25 np0005532761 python3.9[207856]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205625 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:56:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:26 np0005532761 python3.9[207981]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931385.0674999-1622-78994344199133/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:26 np0005532761 python3.9[208133]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:26.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:56:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:27.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:56:27 np0005532761 python3.9[208260]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931386.1836538-1622-84694801264417/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:27 np0005532761 python3.9[208412]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:56:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:56:28 np0005532761 python3.9[208537]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931387.2834377-1622-142914176485056/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:28.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:56:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:29.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:30.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:56:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:31 np0005532761 python3.9[208694]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 23 15:56:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:31.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf74003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:32 np0005532761 python3.9[208847]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:56:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:33 np0005532761 python3.9[209000]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:56:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:56:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:56:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:33.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:33 np0005532761 python3.9[209154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:34 np0005532761 python3.9[209306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:34.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:34 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:56:34 np0005532761 python3.9[209461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:56:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:35.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:35 np0005532761 python3.9[209614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:36 np0005532761 python3.9[209766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:36.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:36 np0005532761 python3.9[209919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:37.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:37.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:56:37 np0005532761 python3.9[210072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.623273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397623310, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4198, "num_deletes": 502, "total_data_size": 8614490, "memory_usage": 8740928, "flush_reason": "Manual Compaction"}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397715084, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8359352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13173, "largest_seqno": 17370, "table_properties": {"data_size": 8341630, "index_size": 11976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8305186, "raw_average_value_size": 4482, "num_data_blocks": 524, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930950, "oldest_key_time": 1763930950, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 91851 microseconds, and 14100 cpu microseconds.
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.715130) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8359352 bytes OK
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.715147) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.716456) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.716470) EVENT_LOG_v1 {"time_micros": 1763931397716467, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.716486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8597718, prev total WAL file size 8597718, number of live WAL files 2.
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.718292) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8163KB)], [32(12MB)]
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397718339, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 21794303, "oldest_snapshot_seqno": -1}
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:56:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:56:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5076 keys, 15937433 bytes, temperature: kUnknown
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397945541, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15937433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15898653, "index_size": 24974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126978, "raw_average_key_size": 25, "raw_value_size": 15801864, "raw_average_value_size": 3113, "num_data_blocks": 1050, "num_entries": 5076, "num_filter_entries": 5076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.946044) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15937433 bytes
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.948479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.9 rd, 70.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.8 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(4.5) write-amplify(1.9) OK, records in: 6098, records dropped: 1022 output_compression: NoCompression
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.948522) EVENT_LOG_v1 {"time_micros": 1763931397948502, "job": 14, "event": "compaction_finished", "compaction_time_micros": 227358, "compaction_time_cpu_micros": 34151, "output_level": 6, "num_output_files": 1, "total_output_size": 15937433, "num_input_records": 6098, "num_output_records": 5076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397952585, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397958572, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.718231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.958715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.958724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.958727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.958730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:56:37.958733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:38 np0005532761 python3.9[210224]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:38 np0005532761 python3.9[210377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:56:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:39 np0005532761 python3.9[210530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:39.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:40 np0005532761 python3.9[210682]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:56:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:56:40 np0005532761 python3.9[210860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:56:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:40 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:56:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:41.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:42 np0005532761 podman[210886]: 2025-11-23 20:56:42.595035024 +0000 UTC m=+0.112528252 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 15:56:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:42.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:56:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:43.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:43 np0005532761 python3.9[211041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:44 np0005532761 python3.9[211164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931403.0064812-2285-39888338655148/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:56:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:44.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:56:44 np0005532761 python3.9[211317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:56:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:45 np0005532761 python3.9[211441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931404.3700843-2285-166568594701693/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:45.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:46 np0005532761 python3.9[211593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:46 np0005532761 python3.9[211716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931405.5599656-2285-82614282308408/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:46.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:47.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:47 np0005532761 python3.9[211870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf8c001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205647 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:56:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:56:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:56:47 np0005532761 python3.9[211993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931406.7989755-2285-40395771092437/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:56:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:56:48 np0005532761 python3.9[212145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:48.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:56:48 np0005532761 python3.9[212271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931407.983534-2285-26601774290815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:49 np0005532761 python3.9[212425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:50 np0005532761 python3.9[212548]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931409.1007595-2285-89030331180882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:50.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:50 np0005532761 python3.9[212701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:56:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:51 np0005532761 python3.9[212825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931410.2655902-2285-252778125884952/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:51.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:56:51.852 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:56:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:56:51.852 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:56:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:56:51.852 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:56:51 np0005532761 python3.9[212977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:52 np0005532761 python3.9[213100]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931411.429359-2285-188095794176894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:52.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:56:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:53 np0005532761 python3.9[213254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:53 np0005532761 python3.9[213377]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931412.7021465-2285-45340209725091/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:54 np0005532761 python3.9[213529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:54 np0005532761 podman[213559]: 2025-11-23 20:56:54.554649331 +0000 UTC m=+0.078913732 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
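The podman health_status event above is emitted by the container's configured healthcheck ('test': '/openstack/healthcheck'). The same state can be read back with podman inspect; the JSON key has moved between podman releases, so this sketch probes both spellings:

```python
# Sketch: read the health state podman logs above via "podman inspect".
# Older podman exposes State.Healthcheck, newer State.Health; try both.
import json, subprocess

def container_health(name="ovn_metadata_agent"):
    data = json.loads(subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True).stdout)[0]
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status")  # e.g. "healthy"

print(container_health())
```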
Nov 23 15:56:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:54 np0005532761 python3.9[213672]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931413.9058301-2285-35875688285329/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:56:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:55 np0005532761 python3.9[213825]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205655 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
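The haproxy UP/DOWN transitions above are driven by Layer4 checks, i.e. bare TCP connect probes against each NFS backend. A sketch of such a check, with the backend address as a placeholder:

```python
# A Layer4 check in haproxy's sense is just a TCP connect attempt.
import socket

def l4_check(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # haproxy would log "Layer4 check passed"
    except OSError:
        return False      # "Layer4 connection problem", server DOWN
```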
Nov 23 15:56:56 np0005532761 python3.9[213948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931415.038834-2285-142032261345222/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:56 np0005532761 python3.9[214101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:56:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:56:57.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:56:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:57 np0005532761 python3.9[214225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931416.3191307-2285-74404599048254/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 15:56:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:56:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 15:56:57 np0005532761 python3.9[214377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:58 np0005532761 python3.9[214500]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931417.542252-2285-128868700402670/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:56:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:59 np0005532761 python3.9[214653]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:56:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:56:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:59 np0005532761 python3.9[214777]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931418.660286-2285-198653183402313/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:00.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:57:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:01.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf840029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:02.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:57:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:57:03
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.nfs', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'images']
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:57:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:57:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
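The per-pool lines above fit pg_target = usage_ratio × bias × 300. The budget of 300 is inferred from the logged numbers themselves (it would match 3 OSDs at a 100-PG-per-OSD target, but that is an assumption); the raw target is then quantized, and tiny changes leave the current pg_num alone. Reproducing two of the logged targets:

```python
# Re-deriving the pg_autoscaler arithmetic from the lines above.
# PG_BUDGET = 300 is inferred from the logged ratios and targets.
PG_BUDGET = 300

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * PG_BUDGET

# Pool '.mgr': matches the logged target ~0.0021557249951162337
print(pg_target(7.185749983720779e-06, 1.0))
# Pool 'cephfs.cephfs.meta': matches the logged ~0.0006104707950771635
print(pg_target(5.087256625643029e-07, 4.0))
```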
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:57:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:57:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:03.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:04 np0005532761 python3.9[214956]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
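The command task above asserts that entries under /run/libvirt carry container_*_t SELinux types (with pipefail set, the task fails if grep matches nothing). A sketch of the inverse view, collecting any entry that lacks such a type by reading the context from the security.selinux extended attribute:

```python
# Sketch of what the "ls -lRZ | grep" task above is verifying: walk
# /run/libvirt and flag entries without a container_*_t SELinux type.
import os, re

def mislabeled(root="/run/libvirt"):
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            ctx = os.getxattr(path, "security.selinux")
            ctx = ctx.decode().rstrip("\x00")  # e.g. system_u:object_r:...
            if not re.search(r":container_\S+_t", ctx):
                bad.append((path, ctx))
    return bad
```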
Nov 23 15:57:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:04 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:57:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:05.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40048a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:05 np0005532761 python3.9[215113]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
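The seboolean task above persistently enables the os_enable_vtpm SELinux boolean; the CLI equivalent is setsebool -P. A sketch:

```python
# Equivalent of the ansible.posix.seboolean task above via setsebool.
import subprocess

def set_boolean(name="os_enable_vtpm", on=True):
    # -P persists the change into policy so it survives reboots
    subprocess.run(
        ["setsebool", "-P", name, "on" if on else "off"],
        check=True)
```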
Nov 23 15:57:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:07.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:07.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
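The dispatcher errors above show webhook notifications timing out against the compute-1/compute-2 receivers and being abandoned after a bounded number of attempts. A sketch of that delivery pattern (a per-request deadline plus bounded retry with backoff); the payload shape here is illustrative, not Alertmanager's exact schema:

```python
# Sketch of a webhook notify with a deadline and bounded retries,
# the pattern whose failure the alertmanager lines above record.
import json, time, urllib.request, urllib.error

def notify(url, alerts, attempts=3, timeout=10.0):
    body = json.dumps({"alerts": alerts}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status
        except (urllib.error.URLError, OSError):
            if attempt == attempts:
                raise  # "notify retry canceled after N attempts"
            time.sleep(2 ** attempt)  # back off before retrying
```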
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:07.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40048c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:07 np0005532761 dbus-broker-launch[806]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:57:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:57:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:57:07 np0005532761 python3.9[215271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:08 np0005532761 python3.9[215423]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:09 np0005532761 python3.9[215577]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:09.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:09 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:09 np0005532761 python3.9[215729]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:10 np0005532761 python3.9[215881]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:10.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:10 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:57:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:57:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:11.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa40048e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:11 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.506224) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432506440, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 510, "num_deletes": 252, "total_data_size": 596358, "memory_usage": 606672, "flush_reason": "Manual Compaction"}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432511269, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 411643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17372, "largest_seqno": 17880, "table_properties": {"data_size": 409057, "index_size": 622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6605, "raw_average_key_size": 19, "raw_value_size": 403893, "raw_average_value_size": 1191, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931398, "oldest_key_time": 1763931398, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5364 microseconds, and 2036 cpu microseconds.
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.511583) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 411643 bytes OK
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.511730) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.513540) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.513557) EVENT_LOG_v1 {"time_micros": 1763931432513552, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.513572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 593484, prev total WAL file size 593484, number of live WAL files 2.
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.515139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(401KB)], [35(15MB)]
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432515166, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16349076, "oldest_snapshot_seqno": -1}
Nov 23 15:57:12 np0005532761 python3.9[216035]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4914 keys, 12435003 bytes, temperature: kUnknown
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432686740, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12435003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12401575, "index_size": 20070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123996, "raw_average_key_size": 25, "raw_value_size": 12311833, "raw_average_value_size": 2505, "num_data_blocks": 836, "num_entries": 4914, "num_filter_entries": 4914, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.686960) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12435003 bytes
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689208) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.2 rd, 72.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 15.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(69.9) write-amplify(30.2) OK, records in: 5415, records dropped: 501 output_compression: NoCompression
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689222) EVENT_LOG_v1 {"time_micros": 1763931432689216, "job": 16, "event": "compaction_finished", "compaction_time_micros": 171660, "compaction_time_cpu_micros": 25763, "output_level": 6, "num_output_files": 1, "total_output_size": 12435003, "num_input_records": 5415, "num_output_records": 4914, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432689506, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432692407, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.515041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.692502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.692509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.692512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.692515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:57:12.692518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
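
The three-line starting/done/beast pattern repeats for every radosgw request; the beast line alone carries client, user, bracketed timestamp, quoted request line, status, byte count, and latency in a fixed layout, and the roughly once-per-second anonymous HEAD / probes from 192.168.122.100 and .102 look like load-balancer health checks (an inference, not stated in the log). A sketch that lifts the beast lines into structured records, again assuming the excerpt is saved as messages:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    with open("messages", encoding="utf-8") as fh:   # filename is an assumption
        for line in fh:
            m = BEAST.search(line)
            if m:
                print(m.group("ip"), repr(m.group("req")),
                      m.group("status"), m.group("latency") + "s")
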
Nov 23 15:57:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:57:12 np0005532761 podman[216140]: 2025-11-23 20:57:12.999673075 +0000 UTC m=+0.130788271 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
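
The podman health_status event above embeds config_data= as a Python literal (single quotes, True), not JSON, so json.loads would reject it while ast.literal_eval parses it safely. A sketch, assuming the event text is available as a string and that no brace characters occur inside its string values:

    import ast

    def config_data(line):
        """Extract the config_data dict from a podman health_status event.

        The payload is a Python literal, so it is parsed with
        ast.literal_eval; the balanced-brace scan assumes no braces
        appear inside the dict's string values.
        """
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unterminated config_data")

For the ovn_controller event this recovers, among other things, the healthcheck test /openstack/healthcheck and the full volume list.
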
Nov 23 15:57:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf800045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:13 np0005532761 python3.9[216265]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:13 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:57:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
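
The handle_command/audit pairs above are the mon-side trace of JSON-prefix commands the cephadm mgr module sends over the mon command interface; the same commands can be issued from the rados Python binding. A sketch, assuming python3-rados is installed, /etc/ceph/ceph.conf is readable, and a client.admin keyring is available:

    import json
    import rados   # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # Same prefixes as the audited commands above.
        for cmd in ({"prefix": "config generate-minimal-conf"},
                    {"prefix": "auth get", "entity": "client.bootstrap-osd"},
                    {"prefix": "osd tree", "states": ["destroyed"],
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out.decode()[:60], errs)
    finally:
        cluster.shutdown()
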
Nov 23 15:57:13 np0005532761 python3.9[216450]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.123560159 +0000 UTC m=+0.040621108 container create 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:57:14 np0005532761 systemd[1]: Started libpod-conmon-34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da.scope.
Nov 23 15:57:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.10346209 +0000 UTC m=+0.020523059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.207614508 +0000 UTC m=+0.124675487 container init 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.214988525 +0000 UTC m=+0.132049474 container start 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.21819113 +0000 UTC m=+0.135252109 container attach 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:57:14 np0005532761 reverent_torvalds[216681]: 167 167
Nov 23 15:57:14 np0005532761 systemd[1]: libpod-34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da.scope: Deactivated successfully.
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.222162297 +0000 UTC m=+0.139223246 container died 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 15:57:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-789b1b7d79ed7478b166ae72436e7f8b4e3277cf28ca5431c877f265f853db1d-merged.mount: Deactivated successfully.
Nov 23 15:57:14 np0005532761 podman[216638]: 2025-11-23 20:57:14.263060251 +0000 UTC m=+0.180121200 container remove 34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_torvalds, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:57:14 np0005532761 systemd[1]: libpod-conmon-34dc53717d5920b2f56eb59ca2551950c7b63855792041dd165eced84bfd61da.scope: Deactivated successfully.
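
Each cephadm probe in this window is the same disposable-container pattern: image pull, create, init, start, attach, the command's stdout under a random container name, then died/remove with the libpod and conmon scopes deactivating around it. Grouping the podman events by container ID makes each probe's lifecycle explicit; the sketch below assumes the excerpt is saved as messages and relies on the event-line shape visible above:

    import re
    from collections import defaultdict

    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+).* container "
        r"(?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    timelines = defaultdict(list)
    with open("messages", encoding="utf-8") as fh:   # filename is an assumption
        for line in fh:
            m = EVENT.search(line)
            if m:
                timelines[m.group("cid")].append(
                    (m.group("ts"), m.group("event")))

    for cid, events in timelines.items():
        print(cid[:12], " -> ".join(ev for _, ev in events))

For 34dc53717d59 above this prints create -> init -> start -> attach -> died -> remove, the whole run lasting well under a second.
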
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.4159317 +0000 UTC m=+0.037167624 container create d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 15:57:14 np0005532761 python3.9[216721]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:14 np0005532761 systemd[1]: Started libpod-conmon-d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c.scope.
Nov 23 15:57:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.400559399 +0000 UTC m=+0.021795353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.50564082 +0000 UTC m=+0.126876744 container init d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.512269628 +0000 UTC m=+0.133505552 container start d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.515253378 +0000 UTC m=+0.136489332 container attach d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:57:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:57:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:57:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:14 np0005532761 festive_jackson[216754]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:57:14 np0005532761 festive_jackson[216754]: --> All data devices are unavailable
Nov 23 15:57:14 np0005532761 systemd[1]: libpod-d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c.scope: Deactivated successfully.
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.838039507 +0000 UTC m=+0.459275431 container died d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:57:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a6cf45c6ed9431df028142551f393779d89f9d9e096407a995e6ac1902048b86-merged.mount: Deactivated successfully.
Nov 23 15:57:14 np0005532761 podman[216736]: 2025-11-23 20:57:14.884680104 +0000 UTC m=+0.505916048 container remove d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jackson, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:57:14 np0005532761 systemd[1]: libpod-conmon-d85c0deb451c1b64fa3b8d9b6a4935bf431fa8b548c129bc7b37ff2120c76d9c.scope: Deactivated successfully.
Nov 23 15:57:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:57:15 np0005532761 python3.9[216921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
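
Taken together, the five ansible-ansible.legacy.copy invocations stage the libvirt TLS material into /etc/pki/qemu: tls.crt becomes both server-cert.pem and client-cert.pem, tls.key becomes both key files, and ca.crt becomes ca-cert.pem, each owned root:qemu with mode 0640. Outside Ansible the same end state is a copy plus chown/chmod; a minimal sketch to be run as root (unlike the module, it is not atomic and does not write via a temp file):

    import os
    import shutil

    # Source-to-destination pairing follows the copy invocations above.
    PAIRS = {
        "/var/lib/openstack/certs/libvirt/default/tls.crt": (
            "/etc/pki/qemu/server-cert.pem",
            "/etc/pki/qemu/client-cert.pem",
        ),
        "/var/lib/openstack/certs/libvirt/default/tls.key": (
            "/etc/pki/qemu/server-key.pem",
            "/etc/pki/qemu/client-key.pem",
        ),
        "/var/lib/openstack/certs/libvirt/default/ca.crt": (
            "/etc/pki/qemu/ca-cert.pem",
        ),
    }

    for src, dests in PAIRS.items():
        for dest in dests:
            shutil.copyfile(src, dest)               # remote_src=True: local copy
            shutil.chown(dest, user="root", group="qemu")
            os.chmod(dest, 0o640)                    # mode=0640 from the task args
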
Nov 23 15:57:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.391139986 +0000 UTC m=+0.033520498 container create eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:57:15 np0005532761 systemd[1]: Started libpod-conmon-eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a.scope.
Nov 23 15:57:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.458004236 +0000 UTC m=+0.100384778 container init eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.464068098 +0000 UTC m=+0.106448610 container start eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:57:15 np0005532761 relaxed_herschel[217066]: 167 167
Nov 23 15:57:15 np0005532761 systemd[1]: libpod-eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a.scope: Deactivated successfully.
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.467044517 +0000 UTC m=+0.109425029 container attach eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:57:15 np0005532761 conmon[217066]: conmon eefa342260760b297f0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a.scope/container/memory.events
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.469567305 +0000 UTC m=+0.111947837 container died eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.3767006 +0000 UTC m=+0.019081142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e794d8d5691d9e7f2b7f3853c8180820b30fd731107354fd88f0ff7d0bc07c13-merged.mount: Deactivated successfully.
Nov 23 15:57:15 np0005532761 podman[217051]: 2025-11-23 20:57:15.506333178 +0000 UTC m=+0.148713680 container remove eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:57:15 np0005532761 systemd[1]: libpod-conmon-eefa342260760b297f0d9bc7b8e6fd3354945c673c3c9b8ced4c4e0b1dcd4d7a.scope: Deactivated successfully.
Nov 23 15:57:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:15 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:15 np0005532761 podman[217088]: 2025-11-23 20:57:15.664975353 +0000 UTC m=+0.048200661 container create e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:57:15 np0005532761 systemd[1]: Started libpod-conmon-e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1.scope.
Nov 23 15:57:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6265c0d42a152da2538814a1513e8cb9aa5481a744048840337fa3a362ca4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6265c0d42a152da2538814a1513e8cb9aa5481a744048840337fa3a362ca4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6265c0d42a152da2538814a1513e8cb9aa5481a744048840337fa3a362ca4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6265c0d42a152da2538814a1513e8cb9aa5481a744048840337fa3a362ca4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:15 np0005532761 podman[217088]: 2025-11-23 20:57:15.640016266 +0000 UTC m=+0.023241664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:15 np0005532761 podman[217088]: 2025-11-23 20:57:15.742111008 +0000 UTC m=+0.125336406 container init e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:57:15 np0005532761 podman[217088]: 2025-11-23 20:57:15.748610671 +0000 UTC m=+0.131835979 container start e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 15:57:15 np0005532761 podman[217088]: 2025-11-23 20:57:15.752215968 +0000 UTC m=+0.135441276 container attach e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]: {
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:    "1": [
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:        {
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "devices": [
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "/dev/loop3"
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            ],
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "lv_name": "ceph_lv0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "lv_size": "21470642176",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "name": "ceph_lv0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "tags": {
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.cluster_name": "ceph",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.crush_device_class": "",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.encrypted": "0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.osd_id": "1",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.type": "block",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.vdo": "0",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:                "ceph.with_tpm": "0"
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            },
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "type": "block",
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:            "vg_name": "ceph_vg0"
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:        }
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]:    ]
Nov 23 15:57:16 np0005532761 ecstatic_elgamal[217105]: }
Nov 23 15:57:16 np0005532761 systemd[1]: libpod-e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1.scope: Deactivated successfully.
Nov 23 15:57:16 np0005532761 podman[217088]: 2025-11-23 20:57:16.048731502 +0000 UTC m=+0.431956810 container died e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:57:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-6a6265c0d42a152da2538814a1513e8cb9aa5481a744048840337fa3a362ca4b-merged.mount: Deactivated successfully.
Nov 23 15:57:16 np0005532761 podman[217088]: 2025-11-23 20:57:16.091470925 +0000 UTC m=+0.474696233 container remove e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:57:16 np0005532761 systemd[1]: libpod-conmon-e97f056c37674a63c04ab3c2d9e9de0b2a959d02d31c5d2bfd7c67f00d86dca1.scope: Deactivated successfully.
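
The JSON the ecstatic_elgamal container printed has the shape of ceph-volume lvm list --format json output (an inference from the fields, not stated in the log): a map from OSD id to LV records, with the lv_tags string duplicated as the structured tags mapping. A sketch that pulls out the fields cephadm cares about, assuming the payload has been captured to lvm_list.json:

    import json

    with open("lvm_list.json", encoding="utf-8") as fh:  # filename is an assumption
        inventory = json.load(fh)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For the record above this yields osd.1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3, with osd_fsid 71c99843-04fc-447b-a9fd-4e17520a545c and encryption disabled.
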
Nov 23 15:57:16 np0005532761 python3.9[217278]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:16 np0005532761 systemd[1]: Reloading.
Nov 23 15:57:16 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:16 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
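
The ansible.builtin.systemd call above requests a daemon reload plus a restart of virtlogd.service, which is what produces the Reloading line and the generator warnings that follow. The shell-level equivalent, wrapped in Python for consistency with the other sketches:

    import subprocess

    # Equivalent of the ansible.builtin.systemd invocation above
    # (daemon_reload=True, name=virtlogd.service, state=restarted).
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "restart", "virtlogd.service"], check=True)
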
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.63505776 +0000 UTC m=+0.053732378 container create 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.618493126 +0000 UTC m=+0.037167764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:16 np0005532761 systemd[1]: Started libpod-conmon-115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb.scope.
Nov 23 15:57:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:16 np0005532761 systemd[1]: Starting libvirt logging daemon socket...
Nov 23 15:57:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:57:16 np0005532761 systemd[1]: Listening on libvirt logging daemon socket.
Nov 23 15:57:16 np0005532761 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.91496806 +0000 UTC m=+0.333642708 container init 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 15:57:16 np0005532761 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 23 15:57:16 np0005532761 systemd[1]: Starting libvirt logging daemon...
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.923522619 +0000 UTC m=+0.342197237 container start 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:57:16 np0005532761 amazing_faraday[217396]: 167 167
Nov 23 15:57:16 np0005532761 systemd[1]: libpod-115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb.scope: Deactivated successfully.
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.934158154 +0000 UTC m=+0.352832822 container attach 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.934527653 +0000 UTC m=+0.353202271 container died 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:57:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a1d72fdbd201aa487478217eaeb47339ac94c0538522c21d2604f819b5e9723f-merged.mount: Deactivated successfully.
Nov 23 15:57:16 np0005532761 podman[217344]: 2025-11-23 20:57:16.9807288 +0000 UTC m=+0.399403418 container remove 115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_faraday, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:57:16 np0005532761 systemd[1]: libpod-conmon-115482ce058f840f158bb96b8f872bf4b4ebeadbee17fe5f43519110602c24fb.scope: Deactivated successfully.
Nov 23 15:57:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:17.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
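
The alertmanager dispatcher gave up after two attempts because neither dashboard receiver answered POST /api/prometheus_receiver before its context deadline. One way to probe the delivery path by hand is to send a minimal Alertmanager-style payload to the same URL; the payload fields follow Alertmanager's documented webhook schema, the URL is copied verbatim from the log line, and whether that endpoint really speaks plain HTTP on 8443 is untested here:

    import json
    import urllib.request

    # Shape follows Alertmanager's documented webhook payload (version "4").
    payload = {
        "version": "4",
        "status": "firing",
        "alerts": [{"labels": {"alertname": "TestAlert"},
                    "annotations": {}, "status": "firing"}],
    }
    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:   # refused/timeout mirrors the logged failure
        print("delivery failed:", exc)
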
Nov 23 15:57:17 np0005532761 systemd[1]: Started libvirt logging daemon.
Nov 23 15:57:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.133920139 +0000 UTC m=+0.042983271 container create b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 15:57:17 np0005532761 systemd[1]: Started libpod-conmon-b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d.scope.
Nov 23 15:57:17 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:57:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785bacac57e605667f0194e9cd8a62f560852f463db9371937809588b6e1d85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785bacac57e605667f0194e9cd8a62f560852f463db9371937809588b6e1d85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785bacac57e605667f0194e9cd8a62f560852f463db9371937809588b6e1d85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:17 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785bacac57e605667f0194e9cd8a62f560852f463db9371937809588b6e1d85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.114533211 +0000 UTC m=+0.023596363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.222635213 +0000 UTC m=+0.131698425 container init b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.231545681 +0000 UTC m=+0.140608803 container start b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.235254951 +0000 UTC m=+0.144318083 container attach b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:57:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:17 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:57:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:57:17 np0005532761 python3.9[217620]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:17 np0005532761 systemd[1]: Reloading.
Nov 23 15:57:17 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:17 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:17 np0005532761 busy_bhabha[217469]: {}
Nov 23 15:57:17 np0005532761 podman[217428]: 2025-11-23 20:57:17.950218892 +0000 UTC m=+0.859282014 container died b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:57:18 np0005532761 systemd[1]: libpod-b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d.scope: Deactivated successfully.
Nov 23 15:57:18 np0005532761 systemd[1]: libpod-b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d.scope: Consumed 1.069s CPU time.
Nov 23 15:57:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8785bacac57e605667f0194e9cd8a62f560852f463db9371937809588b6e1d85-merged.mount: Deactivated successfully.
Nov 23 15:57:18 np0005532761 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 23 15:57:18 np0005532761 podman[217428]: 2025-11-23 20:57:18.140941706 +0000 UTC m=+1.050004818 container remove b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bhabha, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:57:18 np0005532761 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 23 15:57:18 np0005532761 systemd[1]: libpod-conmon-b99c7c22c285bc1cd1b881de769121e81751728dfe59fe833496aaa1a81b044d.scope: Deactivated successfully.
Nov 23 15:57:18 np0005532761 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 23 15:57:18 np0005532761 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 23 15:57:18 np0005532761 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:57:18 np0005532761 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 23 15:57:18 np0005532761 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 23 15:57:18 np0005532761 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 15:57:18 np0005532761 systemd[1]: Started libvirt nodedev daemon.
Nov 23 15:57:18 np0005532761 lvm[217733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:57:18 np0005532761 lvm[217733]: VG ceph_vg0 finished
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:18 np0005532761 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 23 15:57:18 np0005532761 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 23 15:57:18 np0005532761 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 23 15:57:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:57:18 np0005532761 python3.9[217940]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:18 np0005532761 systemd[1]: Reloading.
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:18 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:19 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:19 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:19 np0005532761 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 23 15:57:19 np0005532761 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 23 15:57:19 np0005532761 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 23 15:57:19 np0005532761 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 23 15:57:19 np0005532761 systemd[1]: Starting libvirt proxy daemon...
Nov 23 15:57:19 np0005532761 systemd[1]: Started libvirt proxy daemon.
Nov 23 15:57:19 np0005532761 setroubleshoot[217725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0b3695d9-8000-44f4-87ba-b1242683e841
Nov 23 15:57:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:19 np0005532761 setroubleshoot[217725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Nov 23 15:57:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:19.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:19 np0005532761 setroubleshoot[217725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0b3695d9-8000-44f4-87ba-b1242683e841
Nov 23 15:57:19 np0005532761 setroubleshoot[217725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Nov 23 15:57:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:19 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:20 np0005532761 python3.9[218154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:20 np0005532761 systemd[1]: Reloading.
Nov 23 15:57:20 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:20 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:20 np0005532761 systemd[1]: Listening on libvirt locking daemon socket.
Nov 23 15:57:20 np0005532761 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 23 15:57:20 np0005532761 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 15:57:20 np0005532761 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 23 15:57:20 np0005532761 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 23 15:57:20 np0005532761 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 23 15:57:20 np0005532761 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 23 15:57:20 np0005532761 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 23 15:57:20 np0005532761 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 23 15:57:20 np0005532761 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 23 15:57:20 np0005532761 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 15:57:20 np0005532761 systemd[1]: Started libvirt QEMU daemon.
Nov 23 15:57:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:20.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:57:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:21 np0005532761 python3.9[218396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:21 np0005532761 systemd[1]: Reloading.
Nov 23 15:57:21 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:21 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:21.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:21 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:21 np0005532761 systemd[1]: Starting libvirt secret daemon socket...
Nov 23 15:57:21 np0005532761 systemd[1]: Listening on libvirt secret daemon socket.
Nov 23 15:57:21 np0005532761 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 23 15:57:21 np0005532761 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 23 15:57:21 np0005532761 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 23 15:57:21 np0005532761 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 23 15:57:21 np0005532761 systemd[1]: Starting libvirt secret daemon...
Nov 23 15:57:21 np0005532761 systemd[1]: Started libvirt secret daemon.
Nov 23 15:57:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:22.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:23.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:23 np0005532761 python3.9[218612]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:23 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205724 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:57:24 np0005532761 python3.9[218764]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:57:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:24.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:25 np0005532761 podman[218890]: 2025-11-23 20:57:25.228766283 +0000 UTC m=+0.063604622 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 15:57:25 np0005532761 python3.9[218931]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:25.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:25 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:26 np0005532761 python3.9[219091]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:57:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 23 15:57:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:27.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:57:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:27 np0005532761 python3.9[219243]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:27 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:27] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:57:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:27] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:57:27 np0005532761 python3.9[219364]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931446.9199774-3359-172474139484745/.source.xml follow=False _original_basename=secret.xml.j2 checksum=2095b2efdb764c083af64051baa9ed5d4618fea0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:28 np0005532761 python3.9[219517]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 03808be8-ae4a-5548-82e6-4a294f1bc627
    virsh secret-define --file /tmp/secret.xml
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 23 15:57:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:29 np0005532761 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 23 15:57:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:29 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:29 np0005532761 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 23 15:57:29 np0005532761 python3.9[219680]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:30 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:57:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:31 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:32 np0005532761 python3.9[220145]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:57:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:57:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:57:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:57:33 np0005532761 python3.9[220299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:33.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:57:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:33 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:57:33 np0005532761 python3.9[220422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931452.840158-3524-13545727825733/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:34.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 15:57:35 np0005532761 python3.9[220575]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:35.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:35 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:35 np0005532761 python3.9[220728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:36 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:57:36 np0005532761 python3.9[220806]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:37.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:37.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:37 np0005532761 python3.9[220961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:37 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:37 np0005532761 python3.9[221039]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.dq13mj91 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:57:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:57:38 np0005532761 python3.9[221191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:38.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 15:57:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:57:39 np0005532761 python3.9[221271]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:39.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:39 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:40 np0005532761 python3.9[221423]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 23 15:57:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:41 np0005532761 python3[221603]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:57:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:41 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:42 np0005532761 python3.9[221755]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:42 np0005532761 python3.9[221833]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:57:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa400c5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:43 np0005532761 podman[221959]: 2025-11-23 20:57:43.35372099 +0000 UTC m=+0.089107654 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 15:57:43 np0005532761 python3.9[222006]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:43.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:43 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205743 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:57:43 np0005532761 python3.9[222090]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205744 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:57:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:57:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:45 np0005532761 python3.9[222244]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:45.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:45 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:45 np0005532761 python3.9[222322]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:46 np0005532761 python3.9[222475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:47.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:47 np0005532761 python3.9[222554]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:47.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:47 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:57:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:57:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:57:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:57:48 np0005532761 python3.9[222706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:57:49 np0005532761 python3.9[222832]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931467.700397-3899-118076195401501/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:49.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:49 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:50 np0005532761 python3.9[222987]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:50 np0005532761 python3.9[223140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:57:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:51.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:51 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:57:51.853 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:57:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:57:51.853 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:57:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:57:51.853 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:57:51 np0005532761 python3.9[223296]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:52 np0005532761 python3.9[223449]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:57:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:53 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:53 np0005532761 python3.9[223603]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:57:54 np0005532761 python3.9[223758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:54.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:57:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:55 np0005532761 podman[223914]: 2025-11-23 20:57:55.375654007 +0000 UTC m=+0.075754198 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 15:57:55 np0005532761 python3.9[223915]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:55 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:56 np0005532761 python3.9[224086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:57:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:57:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:57:56 np0005532761 python3.9[224210]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931475.9583964-4115-40354496261447/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:57:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:57:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:57 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 23 15:57:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:57:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 23 15:57:58 np0005532761 python3.9[224365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:58 np0005532761 python3.9[224488]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931477.5696177-4160-251523144300901/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:57:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:59 np0005532761 python3.9[224642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:57:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:57:59 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:00 np0005532761 python3.9[224765]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931478.9793117-4205-218371566444597/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:00.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:58:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:01 np0005532761 python3.9[224943]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:01 np0005532761 systemd[1]: Reloading.
Nov 23 15:58:01 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:01 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:01 np0005532761 systemd[1]: Reached target edpm_libvirt.target.
Nov 23 15:58:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:01.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:01 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:02 np0005532761 python3.9[225137]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:58:02 np0005532761 systemd[1]: Reloading.
Nov 23 15:58:02 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:02 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:58:02 np0005532761 systemd[1]: Reloading.
Nov 23 15:58:03 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:03 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:58:03
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.nfs', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes', 'images', 'vms']
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:58:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:58:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:58:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:58:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:03.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:03 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:03 np0005532761 systemd[1]: session-53.scope: Deactivated successfully.
Nov 23 15:58:03 np0005532761 systemd[1]: session-53.scope: Consumed 3min 19.653s CPU time.
Nov 23 15:58:03 np0005532761 systemd-logind[820]: Session 53 logged out. Waiting for processes to exit.
Nov 23 15:58:03 np0005532761 systemd-logind[820]: Removed session 53.
Nov 23 15:58:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:04.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 15:58:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:05.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:05 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf98004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:06.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:58:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:58:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:07.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:58:07 np0005532761 kernel: ganesha.nfsd[222835]: segfault at 50 ip 00007fe0590d732e sp 00007fe022ffc210 error 4 in libntirpc.so.5.8[7fe0590bc000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 23 15:58:07 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:58:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[170170]: 23/11/2025 20:58:07 : epoch 6923744e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf80003820 fd 48 proxy ignored for local
Nov 23 15:58:07 np0005532761 systemd[1]: Started Process Core Dump (PID 225243/UID 0).
Nov 23 15:58:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:07] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:58:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:07] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:58:08 np0005532761 systemd-coredump[225244]: Process 170198 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 70:
                                                       #0  0x00007fe0590d732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Nov 23 15:58:08 np0005532761 systemd[1]: systemd-coredump@6-225243-0.service: Deactivated successfully.
Nov 23 15:58:08 np0005532761 systemd[1]: systemd-coredump@6-225243-0.service: Consumed 1.108s CPU time.
Nov 23 15:58:08 np0005532761 podman[225249]: 2025-11-23 20:58:08.358885346 +0000 UTC m=+0.032936957 container died 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:58:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f39f8371b014f6e2159fd72ec274ec402143fc7178fdbdfb90e260ce1d38820c-merged.mount: Deactivated successfully.
Nov 23 15:58:08 np0005532761 podman[225249]: 2025-11-23 20:58:08.400596327 +0000 UTC m=+0.074647928 container remove 0ce66092bfc793c9b7f597d9b7359c45837a8c1664b9f1ff66feced8c3604c1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:58:08 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:58:08 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:58:08 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.803s CPU time.
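status=139 in the unit failure above is the usual 128+signal encoding, i.e. the container payload died on signal 11 (SIGSEGV), consistent with the ganesha.nfsd segfault. A one-line decoding sketch:

    import signal

    status = 139  # from "Main process exited, code=exited, status=139/n/a"
    print(signal.Signals(status - 128).name)  # SIGSEGV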
Nov 23 15:58:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:08.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:58:09 np0005532761 systemd-logind[820]: New session 54 of user zuul.
Nov 23 15:58:09 np0005532761 systemd[1]: Started Session 54 of User zuul.
Nov 23 15:58:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:09.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:10 np0005532761 python3.9[225446]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:58:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:58:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:12 np0005532761 python3.9[225602]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:58:12 np0005532761 network[225619]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:58:12 np0005532761 network[225620]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:58:12 np0005532761 network[225621]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:58:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:12.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205813 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
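The HAProxy warning fits the crash sequence: the nfs.cephfs.2 backend refuses connections while its ganesha container is down, the Layer4 health check marks it DOWN, and two NFS backends remain active. A small sketch, assuming this exact message format, for pulling the remaining-server counts out of such a line:

    import re

    line = ('Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, '
            'info: "Connection refused", check duration: 0ms. '
            '2 active and 0 backup servers left.')
    m = re.search(r'(\d+) active and (\d+) backup servers left', line)
    print(m.groups())  # ('2', '0')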
Nov 23 15:58:13 np0005532761 podman[225642]: 2025-11-23 20:58:13.582465348 +0000 UTC m=+0.173218864 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 23 15:58:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:58:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:17.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:58:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:17.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:17] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:58:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:17] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 15:58:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:58:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:58:18 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 7.
Nov 23 15:58:18 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:58:18 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.803s CPU time.
Nov 23 15:58:18 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
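The restart counter is at 7, and the restart job fires ten seconds after the failure was logged (15:58:08 above, 15:58:18 here), which would match a RestartSec= of 10 s in the cephadm-generated unit (an assumption; the unit file is not shown in this log). The gap, computed from the two journal timestamps:

    from datetime import datetime

    # Timestamps copied from the failure and restart lines (log-local clock).
    failed    = datetime.strptime("Nov 23 15:58:08", "%b %d %H:%M:%S")
    restarted = datetime.strptime("Nov 23 15:58:18", "%b %d %H:%M:%S")
    print((restarted - failed).total_seconds())  # 10.0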
Nov 23 15:58:18 np0005532761 python3.9[225927]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:58:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:18 np0005532761 podman[226032]: 2025-11-23 20:58:18.842303704 +0000 UTC m=+0.027701098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:58:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:19.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:58:20 np0005532761 python3.9[226138]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:58:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:20 np0005532761 podman[226032]: 2025-11-23 20:58:20.976559269 +0000 UTC m=+2.161956633 container create c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:58:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:21.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197e0a4ea4fc929dfc2864a238e93b67bbd10d1f6d3e3912587de61fad6aae0a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197e0a4ea4fc929dfc2864a238e93b67bbd10d1f6d3e3912587de61fad6aae0a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197e0a4ea4fc929dfc2864a238e93b67bbd10d1f6d3e3912587de61fad6aae0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197e0a4ea4fc929dfc2864a238e93b67bbd10d1f6d3e3912587de61fad6aae0a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:22 np0005532761 podman[226032]: 2025-11-23 20:58:22.198789026 +0000 UTC m=+3.384186380 container init c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:58:22 np0005532761 podman[226032]: 2025-11-23 20:58:22.206679906 +0000 UTC m=+3.392077240 container start c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:58:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:22 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:58:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:22 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:58:22 np0005532761 bash[226032]: c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:58:22 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:58:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:22.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:58:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:58:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
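On startup ganesha enters a 90-second grace window during which NFS clients may reclaim their locks; given the 20:58:23 timestamp on the line above, grace should lift at about 20:59:53. Computed from the log values:

    from datetime import datetime, timedelta

    start = datetime.strptime("23/11/2025 20:58:23", "%d/%m/%Y %H:%M:%S")
    print(start + timedelta(seconds=90))  # 2025-11-23 20:59:53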
Nov 23 15:58:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:23.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.010274344 +0000 UTC m=+0.040297283 container create 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 15:58:24 np0005532761 systemd[1]: Started libpod-conmon-1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb.scope.
Nov 23 15:58:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:23.993589351 +0000 UTC m=+0.023612310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.100351234 +0000 UTC m=+0.130374213 container init 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.117375327 +0000 UTC m=+0.147398266 container start 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.120747716 +0000 UTC m=+0.150770675 container attach 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 15:58:24 np0005532761 systemd[1]: libpod-1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb.scope: Deactivated successfully.
Nov 23 15:58:24 np0005532761 jolly_mcnulty[226343]: 167 167
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.127094746 +0000 UTC m=+0.157117755 container died 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:58:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3ed39d8b5a22ddf388fdf42062773028d1f23261c05199df77e05eb1bd7e51a0-merged.mount: Deactivated successfully.
Nov 23 15:58:24 np0005532761 podman[226327]: 2025-11-23 20:58:24.184007372 +0000 UTC m=+0.214030331 container remove 1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mcnulty, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:58:24 np0005532761 systemd[1]: libpod-conmon-1ab635c002a2af929f29cea7f444c49a19122941fa61eceaeefc3c7415af62bb.scope: Deactivated successfully.
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.374229516 +0000 UTC m=+0.046977991 container create 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:58:24 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:24 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:24 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:58:24 np0005532761 systemd[1]: Started libpod-conmon-5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac.scope.
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.352649132 +0000 UTC m=+0.025397657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.482730326 +0000 UTC m=+0.155478831 container init 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.491961401 +0000 UTC m=+0.164709876 container start 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.4956517 +0000 UTC m=+0.168400205 container attach 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:58:24 np0005532761 peaceful_golick[226383]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:58:24 np0005532761 peaceful_golick[226383]: --> All data devices are unavailable
Nov 23 15:58:24 np0005532761 systemd[1]: libpod-5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac.scope: Deactivated successfully.
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.836387013 +0000 UTC m=+0.509135508 container died 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:58:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d06e833185ad21f6fcd586c9b6f61a45ac86223cfb048616aedc823434ca1d5c-merged.mount: Deactivated successfully.
Nov 23 15:58:24 np0005532761 podman[226367]: 2025-11-23 20:58:24.886879629 +0000 UTC m=+0.559628104 container remove 5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:58:24 np0005532761 systemd[1]: libpod-conmon-5e1eb6b45040622e2c0224b2ef6899fd125383e740968664386b3b56f19f54ac.scope: Deactivated successfully.
Nov 23 15:58:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.486533477 +0000 UTC m=+0.051844791 container create b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:58:25 np0005532761 systemd[1]: Started libpod-conmon-b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353.scope.
Nov 23 15:58:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.462712833 +0000 UTC m=+0.028024167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.565741136 +0000 UTC m=+0.131052460 container init b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:58:25 np0005532761 podman[226516]: 2025-11-23 20:58:25.570941535 +0000 UTC m=+0.074451514 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.576912653 +0000 UTC m=+0.142223967 container start b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 15:58:25 np0005532761 vigilant_chatelet[226530]: 167 167
Nov 23 15:58:25 np0005532761 systemd[1]: libpod-b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353.scope: Deactivated successfully.
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.583983102 +0000 UTC m=+0.149294426 container attach b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.584394283 +0000 UTC m=+0.149705577 container died b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:58:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d01cb607b5b14fdf7170a10865029b5bb765d79588780a39c3777b3031ed0c0f-merged.mount: Deactivated successfully.
Nov 23 15:58:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:25 np0005532761 podman[226504]: 2025-11-23 20:58:25.637569079 +0000 UTC m=+0.202880403 container remove b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:58:25 np0005532761 systemd[1]: libpod-conmon-b948f7283be887903e84d489444b0d3cb048a3124a9f15a0be89c168063f5353.scope: Deactivated successfully.
Nov 23 15:58:25 np0005532761 podman[226564]: 2025-11-23 20:58:25.810089823 +0000 UTC m=+0.052633323 container create c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:58:25 np0005532761 systemd[1]: Started libpod-conmon-c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66.scope.
Nov 23 15:58:25 np0005532761 podman[226564]: 2025-11-23 20:58:25.78219331 +0000 UTC m=+0.024736860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d7d9d7e5975717db3fe509364666856116545145750ca6d56bd9b1897bdf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d7d9d7e5975717db3fe509364666856116545145750ca6d56bd9b1897bdf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d7d9d7e5975717db3fe509364666856116545145750ca6d56bd9b1897bdf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d7d9d7e5975717db3fe509364666856116545145750ca6d56bd9b1897bdf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:25 np0005532761 podman[226564]: 2025-11-23 20:58:25.921444258 +0000 UTC m=+0.163987788 container init c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:58:25 np0005532761 podman[226564]: 2025-11-23 20:58:25.929188875 +0000 UTC m=+0.171732375 container start c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:58:25 np0005532761 podman[226564]: 2025-11-23 20:58:25.936859529 +0000 UTC m=+0.179403059 container attach c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]: {
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:    "1": [
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:        {
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "devices": [
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "/dev/loop3"
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            ],
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "lv_name": "ceph_lv0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "lv_size": "21470642176",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "name": "ceph_lv0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "tags": {
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.cluster_name": "ceph",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.crush_device_class": "",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.encrypted": "0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.osd_id": "1",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.type": "block",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.vdo": "0",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:                "ceph.with_tpm": "0"
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            },
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "type": "block",
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:            "vg_name": "ceph_vg0"
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:        }
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]:    ]
Nov 23 15:58:26 np0005532761 stoic_johnson[226580]: }
Nov 23 15:58:26 np0005532761 systemd[1]: libpod-c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66.scope: Deactivated successfully.
Nov 23 15:58:26 np0005532761 podman[226564]: 2025-11-23 20:58:26.220704287 +0000 UTC m=+0.463247827 container died c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:58:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a37d7d9d7e5975717db3fe509364666856116545145750ca6d56bd9b1897bdf0-merged.mount: Deactivated successfully.
Nov 23 15:58:26 np0005532761 podman[226564]: 2025-11-23 20:58:26.279994777 +0000 UTC m=+0.522538317 container remove c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_johnson, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 15:58:26 np0005532761 systemd[1]: libpod-conmon-c73e5103874c4ac75c8bf2b39e735ab6d1e29c3c20f4dc72270a14fab7298d66.scope: Deactivated successfully.
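[editor note] The JSON the stoic_johnson container printed above matches the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of LV records carrying the ceph.* tags. A minimal sketch for pulling OSD-to-device mappings out of a captured copy of that output; the input file name "osd_list.json" is an assumption for illustration.

    #!/usr/bin/env python3
    # Minimal sketch: map OSD ids to LV paths and tags from the JSON the
    # container printed above. "osd_list.json" is a hypothetical capture.
    import json

    with open("osd_list.json") as f:
        osds = json.load(f)  # top-level keys are OSD ids, e.g. "1"

    for osd_id, records in osds.items():
        for rec in records:
            tags = rec.get("tags", {})
            print(f"osd.{osd_id}: lv={rec['lv_path']}"
                  f" type={rec.get('type')}"
                  f" osd_fsid={tags.get('ceph.osd_fsid')}"
                  f" devices={','.join(rec.get('devices', []))}")

Against the record above this prints one line for osd.1 backed by /dev/loop3 via /dev/ceph_vg0/ceph_lv0.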
Nov 23 15:58:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:26.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
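[editor note] The radosgw beast access lines repeated throughout this log (the HEAD / probes are load-balancer health checks) all share one field layout. A minimal sketch, assuming that layout holds, for extracting client IP, method, status, and latency; the field order is inferred from the samples in this log, not from radosgw documentation.

    import re

    # Minimal sketch: parse the beast access-log layout seen in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous '
            '[23/Nov/2025:20:58:26.843 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m.group("ip"), m.group("method"),
              m.group("status"), m.group("latency"))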
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.864075961 +0000 UTC m=+0.044689482 container create 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 15:58:26 np0005532761 systemd[1]: Started libpod-conmon-5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e.scope.
Nov 23 15:58:26 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.846129393 +0000 UTC m=+0.026742914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.947989975 +0000 UTC m=+0.128603496 container init 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.954874979 +0000 UTC m=+0.135488480 container start 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.958339061 +0000 UTC m=+0.138952582 container attach 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:58:26 np0005532761 modest_khayyam[226716]: 167 167
Nov 23 15:58:26 np0005532761 systemd[1]: libpod-5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e.scope: Deactivated successfully.
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.960408385 +0000 UTC m=+0.141021896 container died 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 23 15:58:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2f49dab1afaa438f4b4a686dd1c144e462742a2b5894730cfafe61cbf9d5b70d-merged.mount: Deactivated successfully.
Nov 23 15:58:26 np0005532761 podman[226694]: 2025-11-23 20:58:26.996402524 +0000 UTC m=+0.177016025 container remove 5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_khayyam, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:58:27 np0005532761 systemd[1]: libpod-conmon-5bc8664b3c91eb8535c350a6bcc0dfc197e24a0b8093264ac22737ce10fa5a8e.scope: Deactivated successfully.
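[editor note] Each short-lived cephadm helper container above walks the same podman event sequence: create, init, start, attach, died, remove. A minimal sketch that replays that sequence live from `podman events`; JSON field names vary slightly across podman versions, so treat the keys below as assumptions.

    import json
    import subprocess

    # Minimal sketch: stream podman container events as JSON and print the
    # lifecycle seen above. Keys (Time/Status/Name) are assumptions that
    # may differ by podman version; --since 5m is illustrative.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "5m"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("Time"), ev.get("Status"),
                  str(ev.get("ID", ""))[:12], ev.get("Name"))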
Nov 23 15:58:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:27.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
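[editor note] The Alertmanager errors above are transport failures, not HTTP errors: the dashboard receiver at http://compute-N.ctlplane.example.com:8443/api/prometheus_receiver never answers ("context deadline exceeded", "dial tcp ... i/o timeout"). A throwaway stand-in listener, useful only to confirm whether port 8443 is reachable at all from the Alertmanager host; it is emphatically not the Ceph dashboard API.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Minimal sketch: accept POSTs on the path Alertmanager is retrying,
    # so reachability can be separated from dashboard-side failures.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(self.path, "->", body[:200])
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()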
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.151959056 +0000 UTC m=+0.043224712 container create 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 15:58:27 np0005532761 systemd[1]: Started libpod-conmon-1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128.scope.
Nov 23 15:58:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:58:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1b798ff6ef2271fb24c5ec20c9c548300b10b93caa553c43c3976f6dc4fdd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1b798ff6ef2271fb24c5ec20c9c548300b10b93caa553c43c3976f6dc4fdd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1b798ff6ef2271fb24c5ec20c9c548300b10b93caa553c43c3976f6dc4fdd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1b798ff6ef2271fb24c5ec20c9c548300b10b93caa553c43c3976f6dc4fdd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.13744376 +0000 UTC m=+0.028709446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.242505798 +0000 UTC m=+0.133771474 container init 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.248453326 +0000 UTC m=+0.139718972 container start 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.251406104 +0000 UTC m=+0.142671760 container attach 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:58:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:27.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:27 np0005532761 python3.9[226930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:58:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:58:27 np0005532761 lvm[227001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:58:27 np0005532761 lvm[227001]: VG ceph_vg0 finished
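[editor note] The two lvm messages above appear to be event-driven auto-activation noticing that /dev/loop3 brings VG ceph_vg0 to completeness. A minimal sketch to confirm what got activated, using lvs's JSON report format.

    import json
    import subprocess

    # Minimal sketch: list the LVs in the VG the log says is complete.
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "ceph_vg0"], text=True)
    report = json.loads(out)
    for lv in report["report"][0]["lv"]:
        print(lv["lv_name"], lv["vg_name"], lv["lv_size"])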
Nov 23 15:58:27 np0005532761 eager_volhard[226827]: {}
Nov 23 15:58:27 np0005532761 systemd[1]: libpod-1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128.scope: Deactivated successfully.
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.939012965 +0000 UTC m=+0.830278621 container died 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 23 15:58:27 np0005532761 systemd[1]: libpod-1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128.scope: Consumed 1.079s CPU time.
Nov 23 15:58:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8b1b798ff6ef2271fb24c5ec20c9c548300b10b93caa553c43c3976f6dc4fdd0-merged.mount: Deactivated successfully.
Nov 23 15:58:27 np0005532761 podman[226811]: 2025-11-23 20:58:27.985846703 +0000 UTC m=+0.877112359 container remove 1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:58:27 np0005532761 systemd[1]: libpod-conmon-1ef5b258f59dae8b1962097714b36e8f40b0badaf0bc5d69acc4a3e238e5d128.scope: Deactivated successfully.
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:28.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:28 np0005532761 python3.9[227167]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:58:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:29 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:58:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:29 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:58:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:58:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:29.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:58:29 np0005532761 python3.9[227321]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:30 np0005532761 python3.9[227473]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:58:31 np0005532761 python3.9[227628]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:31.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:32 np0005532761 python3.9[227751]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931511.0230205-245-197170931238608/.source.iscsi _original_basename=.gebeubdb follow=False checksum=4f66b5ce33065a9b72582d277c412d4f3ecb6c30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
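[editor note] The copy task above logs the expected SHA-1 content checksum for /etc/iscsi/initiatorname.iscsi. A minimal sketch verifying the deployed file against that logged value.

    import hashlib

    # Minimal sketch: compare the file on disk to the checksum ansible
    # logged for the copy task above.
    EXPECTED = "4f66b5ce33065a9b72582d277c412d4f3ecb6c30"

    with open("/etc/iscsi/initiatorname.iscsi", "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()

    print("match" if digest == EXPECTED else f"mismatch: {digest}")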
Nov 23 15:58:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:58:33 np0005532761 python3.9[227904]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:58:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:58:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:58:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:33.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:33 np0005532761 python3.9[228057]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
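[editor note] The lineinfile task above is idempotent: if a line matching regexp exists, the last such line is replaced; otherwise the new line is inserted after the last insertafter match, or appended at EOF if neither matches. A minimal Python sketch of those semantics using the patterns from the logged task.

    import re

    # Minimal sketch of the lineinfile semantics used above, with the
    # logged path, regexp, insertafter, and line values.
    PATH = "/etc/iscsi/iscsid.conf"
    REGEXP = re.compile(r"^node.session.auth.chap_algs")
    INSERT_AFTER = re.compile(r"^#node.session.auth.chap.algs")
    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    with open(PATH) as f:
        lines = f.read().splitlines()

    hits = [i for i, l in enumerate(lines) if REGEXP.match(l)]
    if hits:
        lines[hits[-1]] = LINE            # replace last regexp match
    else:
        anchors = [i for i, l in enumerate(lines) if INSERT_AFTER.match(l)]
        pos = (anchors[-1] + 1) if anchors else len(lines)
        lines.insert(pos, LINE)           # insert after anchor, else append

    with open(PATH, "w") as f:
        f.write("\n".join(lines) + "\n")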
Nov 23 15:58:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:58:35 np0005532761 python3.9[228211]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:35 np0005532761 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:35.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:36 np0005532761 python3.9[228382]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:36 np0005532761 systemd[1]: Reloading.
Nov 23 15:58:36 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:36 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:37.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 15:58:37 np0005532761 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 15:58:37 np0005532761 systemd[1]: Starting Open-iSCSI...
Nov 23 15:58:37 np0005532761 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 15:58:37 np0005532761 systemd[1]: Started Open-iSCSI.
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:37 np0005532761 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 23 15:58:37 np0005532761 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:37.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:38 np0005532761 python3.9[228585]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:58:38 np0005532761 network[228602]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:58:38 np0005532761 network[228603]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:58:38 np0005532761 network[228604]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:58:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:38.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:58:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205839 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:58:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:39.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:40.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:58:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.003000078s ======
Nov 23 15:58:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 23 15:58:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:58:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:43.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:44 np0005532761 python3.9[228907]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:58:44 np0005532761 podman[228984]: 2025-11-23 20:58:44.565771057 +0000 UTC m=+0.086250427 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
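[editor note] The ovn_controller health-status event above embeds the whole edpm-ansible config_data label in Python-repr form (single quotes, True), so json.loads will not parse it directly. A minimal sketch pulling the healthcheck command back out with ast.literal_eval; the label name and its repr-style value are assumptions taken from this log line.

    import ast
    import json
    import subprocess

    # Minimal sketch: read the config_data label off the running
    # ovn_controller container and print its healthcheck section.
    out = subprocess.check_output(
        ["podman", "inspect", "ovn_controller", "--format",
         '{{ index .Config.Labels "config_data" }}'],
        text=True,
    )
    config = ast.literal_eval(out.strip())  # repr-style, not JSON
    print(json.dumps(config.get("healthcheck", {}), indent=2))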
Nov 23 15:58:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:58:44 np0005532761 python3.9[229086]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 23 15:58:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:45.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:45 np0005532761 python3.9[229243]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:46 np0005532761 python3.9[229366]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931525.3410494-476-14277126781244/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:46.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:58:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:47.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:58:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:47 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:47 np0005532761 python3.9[229520]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:47 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:47 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:58:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:58:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:48.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:48 np0005532761 python3.9[229673]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:58:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:58:48 np0005532761 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 15:58:48 np0005532761 systemd[1]: Stopped Load Kernel Modules.
Nov 23 15:58:48 np0005532761 systemd[1]: Stopping Load Kernel Modules...
Nov 23 15:58:49 np0005532761 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:58:49 np0005532761 systemd[1]: Finished Load Kernel Modules.
Nov 23 15:58:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:49 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:49 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:49.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:49 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:49 np0005532761 python3.9[229830]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:58:50 np0005532761 python3.9[229985]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:50.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:58:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:51 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:51 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:51 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:58:51.854 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:58:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:58:51.854 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:58:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:58:51.854 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:58:51 np0005532761 python3.9[230138]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:52 np0005532761 python3.9[230291]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:52.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:58:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:53 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:53 np0005532761 python3.9[230415]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931532.3278522-650-194848472796328/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:53 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:53.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:53 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:54 np0005532761 python3.9[230567]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:54.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:58:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:55 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:55 np0005532761 python3.9[230722]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:55 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:55.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:55 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:56 np0005532761 podman[230846]: 2025-11-23 20:58:56.266933233 +0000 UTC m=+0.081359138 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 15:58:56 np0005532761 python3.9[230888]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:56.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:57 np0005532761 python3.9[231046]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
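The grep, the lineinfile/replace pair, and this last replace amount to a single edit: make sure /etc/multipath.conf carries an empty blacklist block, and strip a packaged catch-all devnode ".*" entry if one shipped with the file. A condensed sketch of that edit, with the regexes taken from the logged module arguments and an illustrative sample input:

    import re

    def ensure_empty_blacklist(text: str) -> str:
        # grep -q '^blacklist\s*{' gates the two tasks that create the block
        if not re.search(r"^blacklist\s*{", text, re.M):
            text += "blacklist {\n"                                       # lineinfile: line=blacklist {
            text = re.sub(r"^(blacklist {)", r"\1\n}", text, flags=re.M)  # replace: close the block
        # final replace: collapse a catch-all blacklist down to an empty block header
        return re.sub(r'^blacklist\s*{\n[\s]+devnode "\.\*"', "blacklist {", text, flags=re.M)

    print(ensure_empty_blacklist('blacklist {\n    devnode ".*"\n}\n'))
    # -> blacklist {
    #    }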
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:58:57.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:57 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:57 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:57.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:58:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 15:58:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:57 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205857 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:58:58 np0005532761 python3.9[231199]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:58 np0005532761 python3.9[231351]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:58:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4167 writes, 18K keys, 4167 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 4167 writes, 4167 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1461 writes, 5945 keys, 1461 commit groups, 1.0 writes per commit group, ingest: 10.93 MB, 0.02 MB/s#012Interval WAL: 1461 writes, 1461 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     85.5      0.35              0.06         8    0.044       0      0       0.0       0.0#012  L6      1/0   11.86 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0     95.3     79.5      1.14              0.23         7    0.162     32K   3815       0.0       0.0#012 Sum      1/0   11.86 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0     72.9     80.9      1.49              0.29        15    0.099     32K   3815       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5     78.7     75.4      0.65              0.13         6    0.108     16K   2038       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     95.3     79.5      1.14              0.23         7    0.162     32K   3815       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     86.5      0.34              0.06         7    0.049       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.10 MB/s write, 0.11 GB read, 0.09 MB/s read, 1.5 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf3f93d350#2 capacity: 304.00 MB usage: 5.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.0001 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(332,5.11 MB,1.67947%) FilterBlock(16,101.92 KB,0.0327411%) IndexBlock(16,197.70 KB,0.0635097%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 15:58:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:58:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:59 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:59 np0005532761 python3.9[231505]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:59 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:58:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:59.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:59 np0005532761 python3.9[231657]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
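The four lineinfile tasks above each insert directly after the line matching ^defaults with firstmatch=True, so the option added last ends up first. Assuming none of the options pre-existed, the defaults stanza would come out roughly like this, followed by whatever options the stock file already had (the file itself is never printed in the log):

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }

The .multipath_restart_required marker touched a few tasks later is presumably what a later handler keys on to restart multipathd with this configuration.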
Nov 23 15:58:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:58:59 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:00.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:59:01 np0005532761 python3.9[231810]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:01 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:01 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:01.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:01 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:02 np0005532761 python3.9[231990]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:59:02 np0005532761 python3.9[232143]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_20:59:03
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', '.nfs', 'vms', 'default.rgw.log', 'images', 'backups']
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 15:59:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:03 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:59:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
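The pg_autoscaler arithmetic above is reproducible from the logged numbers alone: every pool's "pg target" equals its usage fraction times its bias times 300. The 300 is consistent across all pools and plausibly comes from 100 target PGs per OSD across 3 OSDs, though the log never states that. A check in Python:

    # usage fractions and biases copied from the pg_autoscaler lines above
    T = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs; not stated in the log
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".nfs":               (6.359070782053786e-08, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for pool, (used, bias) in pools.items():
        print(f"{pool}: pg target {used * bias * T:.6g}")

Each result matches the logged pg target, and all of them sit far below the pools' current pg_num, hence "quantized to N (current N)" with no adjustment made.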
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 15:59:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 15:59:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:03 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:03 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:03 np0005532761 python3.9[232296]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:04 np0005532761 python3.9[232374]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:04.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:59:05 np0005532761 python3.9[232527]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:05 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:05 np0005532761 python3.9[232606]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:05 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23dc003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:59:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:05.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:59:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:05 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:06 np0005532761 python3.9[232759]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:59:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:06.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:59:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:59:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:06 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:59:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:07.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:59:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:07 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:07 np0005532761 python3.9[232914]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:07 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:59:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.744442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547744471, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1177, "num_deletes": 255, "total_data_size": 2147144, "memory_usage": 2183440, "flush_reason": "Manual Compaction"}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547757942, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2105783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17881, "largest_seqno": 19057, "table_properties": {"data_size": 2100198, "index_size": 2977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11231, "raw_average_key_size": 18, "raw_value_size": 2089134, "raw_average_value_size": 3470, "num_data_blocks": 133, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931433, "oldest_key_time": 1763931433, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 13805 microseconds, and 7020 cpu microseconds.
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.758245) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2105783 bytes OK
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.758360) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760456) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760472) EVENT_LOG_v1 {"time_micros": 1763931547760468, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760489) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2141926, prev total WAL file size 2141926, number of live WAL files 2.
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.761984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2056KB)], [38(11MB)]
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547762057, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14540786, "oldest_snapshot_seqno": -1}
Nov 23 15:59:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:07 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4992 keys, 14063691 bytes, temperature: kUnknown
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547883053, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 14063691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14028706, "index_size": 21435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126770, "raw_average_key_size": 25, "raw_value_size": 13936587, "raw_average_value_size": 2791, "num_data_blocks": 881, "num_entries": 4992, "num_filter_entries": 4992, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.883233) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14063691 bytes
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.885088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 116.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.9 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(13.6) write-amplify(6.7) OK, records in: 5516, records dropped: 524 output_compression: NoCompression
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.885104) EVENT_LOG_v1 {"time_micros": 1763931547885097, "job": 18, "event": "compaction_finished", "compaction_time_micros": 121043, "compaction_time_cpu_micros": 50397, "output_level": 6, "num_output_files": 1, "total_output_size": 14063691, "num_input_records": 5516, "num_output_records": 4992, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547885582, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547887669, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.761896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.887744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.887750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.887752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.887754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-20:59:07.887756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:08 np0005532761 python3.9[232992]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:08.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:08 np0005532761 python3.9[233145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:59:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:09 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:09 np0005532761 python3.9[233224]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:09 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:09 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:09 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:59:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:09 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:59:10 np0005532761 python3.9[233376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:10 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:10 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:10 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:10.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:59:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:11 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:11 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:11 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:12 np0005532761 python3.9[233566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:12 np0005532761 python3.9[233644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:12.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:59:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:12 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:59:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:13 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:13 np0005532761 python3.9[233798]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:13 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:13 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:13 np0005532761 python3.9[233876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:14 np0005532761 podman[234001]: 2025-11-23 20:59:14.853109555 +0000 UTC m=+0.124843135 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 15:59:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:14.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:59:15 np0005532761 python3.9[234050]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:15 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:15 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:15 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:15 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:15 np0005532761 systemd[1]: Starting Create netns directory...
Nov 23 15:59:15 np0005532761 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:59:15 np0005532761 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:59:15 np0005532761 systemd[1]: Finished Create netns directory.
Nov 23 15:59:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:15 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:15.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:15 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:16 np0005532761 python3.9[234249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:17.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:17.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:17 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:17 np0005532761 python3.9[234403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:17 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:17.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:59:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 15:59:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:17 np0005532761 python3.9[234526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931556.8517013-1271-65070344254108/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:17 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:59:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:59:18 np0005532761 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 23 15:59:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:59:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:19 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:19 np0005532761 python3.9[234681]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:19 np0005532761 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 23 15:59:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:19 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:19 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205919 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:59:20 np0005532761 python3.9[234834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:20 np0005532761 python3.9[234958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931559.758492-1346-256139003684993/.source.json _original_basename=.xy0n99qq follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:20.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 15:59:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:21 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:21 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:21 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:21 np0005532761 python3.9[235136]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:59:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:23 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:24 np0005532761 python3.9[235568]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 23 15:59:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 15:59:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:25 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:25 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:25.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:25 np0005532761 python3.9[235721]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:59:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:25 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:26 np0005532761 podman[235799]: 2025-11-23 20:59:26.563353194 +0000 UTC m=+0.081492312 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 15:59:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:59:26 np0005532761 python3.9[235895]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:59:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:27.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:59:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:27 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:27 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:27.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:59:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:59:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:27 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:59:29 np0005532761 python3[236143]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 15:59:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:29 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 15:59:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 15:59:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:29 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:29.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.715648168 +0000 UTC m=+0.045328198 container create 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:59:29 np0005532761 systemd[1]: Started libpod-conmon-8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c.scope.
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.692709837 +0000 UTC m=+0.022389887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.810949215 +0000 UTC m=+0.140629245 container init 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.817994423 +0000 UTC m=+0.147674433 container start 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.822475782 +0000 UTC m=+0.152155812 container attach 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:29 np0005532761 systemd[1]: libpod-8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c.scope: Deactivated successfully.
Nov 23 15:59:29 np0005532761 friendly_kare[236300]: 167 167
Nov 23 15:59:29 np0005532761 conmon[236300]: conmon 8a22b0e069c6d83e8ce5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c.scope/container/memory.events
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.8254094 +0000 UTC m=+0.155089420 container died 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 23 15:59:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-deed201ac7e0d20c452e84446edf7ca46e14a126eaee29da6036759dcc138510-merged.mount: Deactivated successfully.
Nov 23 15:59:29 np0005532761 podman[236278]: 2025-11-23 20:59:29.864668946 +0000 UTC m=+0.194348956 container remove 8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 23 15:59:29 np0005532761 systemd[1]: libpod-conmon-8a22b0e069c6d83e8ce5da90af93e97752f8782416879116f56e632d4636125c.scope: Deactivated successfully.
Nov 23 15:59:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:29 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:59:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:59:30 np0005532761 podman[236172]: 2025-11-23 20:59:30.246786561 +0000 UTC m=+1.107519623 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.256152161 +0000 UTC m=+0.278513798 container create 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:59:30 np0005532761 systemd[1]: Started libpod-conmon-5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4.scope.
Nov 23 15:59:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.226472421 +0000 UTC m=+0.248834078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.330118371 +0000 UTC m=+0.352480028 container init 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.34023481 +0000 UTC m=+0.362596447 container start 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.343810516 +0000 UTC m=+0.366172153 container attach 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 15:59:30 np0005532761 podman[236381]: 2025-11-23 20:59:30.376177197 +0000 UTC m=+0.043977862 container create c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 15:59:30 np0005532761 podman[236381]: 2025-11-23 20:59:30.354768588 +0000 UTC m=+0.022569283 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 15:59:30 np0005532761 python3[236143]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
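The ansible-edpm_container_manage debug line above shows how the role flattens its config_data dict into a podman create invocation: each 'environment' entry becomes --env, 'net': 'host' becomes --network host, 'privileged': True becomes --privileged=True, every 'volumes' entry becomes its own --volume flag, and the image digest goes last. A minimal Python sketch of that mapping (a hypothetical helper for illustration, not the actual edpm_container_manage code; the --label flags visible in the log are elided):

def podman_create_argv(name: str, cfg: dict) -> list[str]:
    # Mirror the flag order visible in the PODMAN-CONTAINER-DEBUG line above.
    argv = ["podman", "create", "--name", name,
            "--conmon-pidfile", f"/run/{name}.pid"]
    for key, val in cfg.get("environment", {}).items():
        argv += ["--env", f"{key}={val}"]              # KOLLA_CONFIG_STRATEGY=...
    if "healthcheck" in cfg:
        argv += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    # --label config_id/container_name/managed_by/config_data would go here.
    argv += ["--log-driver", "journald", "--log-level", "info"]
    if cfg.get("net"):
        argv += ["--network", cfg["net"]]              # 'host' in this log
    if cfg.get("privileged"):
        argv += ["--privileged=True"]
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]                      # one flag per mount
    argv.append(cfg["image"])                          # digest-pinned image last
    return argv

Called with name='multipathd' and the config_data dict from the log line, this reproduces the argv logged above, minus the elided label flags.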
Nov 23 15:59:30 np0005532761 jovial_germain[236368]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:59:30 np0005532761 jovial_germain[236368]: --> All data devices are unavailable
Nov 23 15:59:30 np0005532761 systemd[1]: libpod-5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4.scope: Deactivated successfully.
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.674090601 +0000 UTC m=+0.696452238 container died 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 15:59:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-eddf281ff961300c4aabfe2e8eb19743aefbccf6e693c42399f7e6525dbbf1f1-merged.mount: Deactivated successfully.
Nov 23 15:59:30 np0005532761 podman[236340]: 2025-11-23 20:59:30.722000447 +0000 UTC m=+0.744362074 container remove 5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:59:30 np0005532761 systemd[1]: libpod-conmon-5a5ec209c8aca59d5d1baf07f529202ff18791334e977a264b7ac363274990c4.scope: Deactivated successfully.
Nov 23 15:59:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
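These radosgw "beast" lines are the load-balancer health probes (anonymous HEAD / requests) that recur throughout this log roughly once per second per frontend. If you need to pull fields out of them, a small regex sketch will do; the pattern below is derived from the sample lines in this log, not from any documented radosgw log grammar:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<lat>[\d.]+)s')

line = ('beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous '
        '[23/Nov/2025:20:59:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000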
Nov 23 15:59:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 15:59:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:31 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.343383444 +0000 UTC m=+0.049688225 container create 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:59:31 np0005532761 systemd[1]: Started libpod-conmon-19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9.scope.
Nov 23 15:59:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.406412102 +0000 UTC m=+0.112716903 container init 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.413741067 +0000 UTC m=+0.120045848 container start 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.41687143 +0000 UTC m=+0.123176211 container attach 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 15:59:31 np0005532761 dazzling_kirch[236706]: 167 167
Nov 23 15:59:31 np0005532761 systemd[1]: libpod-19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9.scope: Deactivated successfully.
Nov 23 15:59:31 np0005532761 conmon[236706]: conmon 19c319beb256465a49bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9.scope/container/memory.events
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.419663505 +0000 UTC m=+0.125968286 container died 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.326843523 +0000 UTC m=+0.033148334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-599c8fe1a7bac90fc6a3105e988c19e333aa3ffdaeb373192fee062e7ae133a1-merged.mount: Deactivated successfully.
Nov 23 15:59:31 np0005532761 podman[236690]: 2025-11-23 20:59:31.454470962 +0000 UTC m=+0.160775733 container remove 19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:59:31 np0005532761 python3.9[236682]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:31 np0005532761 systemd[1]: libpod-conmon-19c319beb256465a49bc09568b00188508dfa7e9187b8171c9d78d84b2deb8b9.scope: Deactivated successfully.
Nov 23 15:59:31 np0005532761 podman[236753]: 2025-11-23 20:59:31.604575529 +0000 UTC m=+0.037913571 container create 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:31 np0005532761 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 23 15:59:31 np0005532761 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 15:59:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:31 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:31 np0005532761 systemd[1]: Started libpod-conmon-31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af.scope.
Nov 23 15:59:31 np0005532761 podman[236753]: 2025-11-23 20:59:31.58846151 +0000 UTC m=+0.021799582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d31c71a905a057ddc21576444cee699f863ca2c82dc2beb689f1fe102168e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d31c71a905a057ddc21576444cee699f863ca2c82dc2beb689f1fe102168e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d31c71a905a057ddc21576444cee699f863ca2c82dc2beb689f1fe102168e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d31c71a905a057ddc21576444cee699f863ca2c82dc2beb689f1fe102168e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:31.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:31 np0005532761 podman[236753]: 2025-11-23 20:59:31.708394583 +0000 UTC m=+0.141732645 container init 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:59:31 np0005532761 podman[236753]: 2025-11-23 20:59:31.717085015 +0000 UTC m=+0.150423067 container start 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:59:31 np0005532761 podman[236753]: 2025-11-23 20:59:31.720426584 +0000 UTC m=+0.153764666 container attach 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:59:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:31 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]: {
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:    "1": [
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:        {
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "devices": [
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "/dev/loop3"
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            ],
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "lv_name": "ceph_lv0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "lv_size": "21470642176",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "name": "ceph_lv0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "tags": {
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.cluster_name": "ceph",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.crush_device_class": "",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.encrypted": "0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.osd_id": "1",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.type": "block",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.vdo": "0",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:                "ceph.with_tpm": "0"
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            },
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "type": "block",
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:            "vg_name": "ceph_vg0"
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:        }
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]:    ]
Nov 23 15:59:31 np0005532761 recursing_swartz[236773]: }
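The JSON block that recursing_swartz just printed is a ceph-volume style inventory of OSD logical volumes, keyed by OSD id, with the ceph.* LV tags repeated in parsed form under "tags". A short sketch of how tooling could consume it (assumes the JSON block arrives on stdin; field names are taken from the output above):

import json, sys

report = json.load(sys.stdin)
for osd_id, lvs in report.items():          # top-level keys are OSD ids ("1")
    for lv in lvs:
        tags = lv["tags"]                   # parsed copy of lv_tags
        print(f"osd.{osd_id}: lv={lv['lv_path']} devices={lv['devices']} "
              f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")

Fed the block above, this prints: osd.1: lv=/dev/ceph_vg0/ceph_lv0 devices=['/dev/loop3'] osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c encrypted=0.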
Nov 23 15:59:32 np0005532761 systemd[1]: libpod-31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af.scope: Deactivated successfully.
Nov 23 15:59:32 np0005532761 podman[236753]: 2025-11-23 20:59:32.015659436 +0000 UTC m=+0.448997468 container died 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 15:59:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-47d31c71a905a057ddc21576444cee699f863ca2c82dc2beb689f1fe102168e8-merged.mount: Deactivated successfully.
Nov 23 15:59:32 np0005532761 podman[236753]: 2025-11-23 20:59:32.069013507 +0000 UTC m=+0.502351559 container remove 31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:59:32 np0005532761 systemd[1]: libpod-conmon-31785072caec338aef79876485a7ce86a8bf4ef5c68a065959f5280104a097af.scope: Deactivated successfully.
Nov 23 15:59:32 np0005532761 python3.9[236970]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.625986219 +0000 UTC m=+0.037736676 container create e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:59:32 np0005532761 systemd[1]: Started libpod-conmon-e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b.scope.
Nov 23 15:59:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205932 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:59:32 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.609608253 +0000 UTC m=+0.021358740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.715217744 +0000 UTC m=+0.126968221 container init e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.721474562 +0000 UTC m=+0.133225009 container start e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.725251172 +0000 UTC m=+0.137001649 container attach e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:59:32 np0005532761 relaxed_lumiere[237076]: 167 167
Nov 23 15:59:32 np0005532761 systemd[1]: libpod-e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b.scope: Deactivated successfully.
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.72930817 +0000 UTC m=+0.141058637 container died e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 15:59:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a70210d1701bb7b568ef96520894a5b629459aa8943859d5c19e3907aa391036-merged.mount: Deactivated successfully.
Nov 23 15:59:32 np0005532761 podman[237036]: 2025-11-23 20:59:32.774208296 +0000 UTC m=+0.185958763 container remove e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:59:32 np0005532761 systemd[1]: libpod-conmon-e905626a67a60e23d742b8e18f5485c32d24f3c40f68d88b95b5589cc2ae208b.scope: Deactivated successfully.
Nov 23 15:59:32 np0005532761 python3.9[237110]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:32.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:32 np0005532761 podman[237131]: 2025-11-23 20:59:32.952482843 +0000 UTC m=+0.045390030 container create 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:59:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 15:59:32 np0005532761 systemd[1]: Started libpod-conmon-3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8.scope.
Nov 23 15:59:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87be35672459794c1ea17df33b80b89619c6ca344e08afc1eac9bf94342a1536/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87be35672459794c1ea17df33b80b89619c6ca344e08afc1eac9bf94342a1536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87be35672459794c1ea17df33b80b89619c6ca344e08afc1eac9bf94342a1536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87be35672459794c1ea17df33b80b89619c6ca344e08afc1eac9bf94342a1536/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:32.932266744 +0000 UTC m=+0.025173981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:33.030337906 +0000 UTC m=+0.123245103 container init 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:33.041830453 +0000 UTC m=+0.134737640 container start 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:33.045892421 +0000 UTC m=+0.138799608 container attach 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 15:59:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:33 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 15:59:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 15:59:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:33 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:33 np0005532761 lvm[237373]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:59:33 np0005532761 python3.9[237357]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931572.9806795-1610-213557820982594/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:33 np0005532761 lvm[237373]: VG ceph_vg0 finished
Nov 23 15:59:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:33 np0005532761 condescending_yonath[237172]: {}
Nov 23 15:59:33 np0005532761 systemd[1]: libpod-3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8.scope: Deactivated successfully.
Nov 23 15:59:33 np0005532761 systemd[1]: libpod-3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8.scope: Consumed 1.082s CPU time.
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:33.757047478 +0000 UTC m=+0.849954665 container died 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 15:59:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-87be35672459794c1ea17df33b80b89619c6ca344e08afc1eac9bf94342a1536-merged.mount: Deactivated successfully.
Nov 23 15:59:33 np0005532761 podman[237131]: 2025-11-23 20:59:33.797280669 +0000 UTC m=+0.890187856 container remove 3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:59:33 np0005532761 systemd[1]: libpod-conmon-3f4586d6decfd7bc7edc8449377b11be81d172ad38fef7282145e2d6591344b8.scope: Deactivated successfully.
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 15:59:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:33 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:34 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:34 np0005532761 python3.9[237489]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:59:34 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:34 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:34 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:34.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 15:59:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:35 np0005532761 python3.9[237601]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:35 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:35 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:35 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:35 np0005532761 systemd[1]: Starting multipathd container...
Nov 23 15:59:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2f6aa83c261d2b9bc8fa45b2a15278ab051320c33442be64c7089ce44139f9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:35 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2f6aa83c261d2b9bc8fa45b2a15278ab051320c33442be64c7089ce44139f9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:35 np0005532761 systemd[1]: Started /usr/bin/podman healthcheck run c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65.
Nov 23 15:59:35 np0005532761 podman[237641]: 2025-11-23 20:59:35.728789505 +0000 UTC m=+0.112286561 container init c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 15:59:35 np0005532761 multipathd[237656]: + sudo -E kolla_set_configs
Nov 23 15:59:35 np0005532761 podman[237641]: 2025-11-23 20:59:35.753385449 +0000 UTC m=+0.136882505 container start c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 15:59:35 np0005532761 podman[237641]: multipathd
Nov 23 15:59:35 np0005532761 systemd[1]: Started multipathd container.
Nov 23 15:59:35 np0005532761 multipathd[237656]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:59:35 np0005532761 multipathd[237656]: INFO:__main__:Validating config file
Nov 23 15:59:35 np0005532761 multipathd[237656]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:59:35 np0005532761 multipathd[237656]: INFO:__main__:Writing out command to execute
Nov 23 15:59:35 np0005532761 multipathd[237656]: ++ cat /run_command
Nov 23 15:59:35 np0005532761 multipathd[237656]: + CMD='/usr/sbin/multipathd -d'
Nov 23 15:59:35 np0005532761 multipathd[237656]: + ARGS=
Nov 23 15:59:35 np0005532761 multipathd[237656]: + sudo kolla_copy_cacerts
Nov 23 15:59:35 np0005532761 podman[237663]: 2025-11-23 20:59:35.829018804 +0000 UTC m=+0.066053840 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 23 15:59:35 np0005532761 multipathd[237656]: + [[ ! -n '' ]]
Nov 23 15:59:35 np0005532761 multipathd[237656]: + . kolla_extend_start
Nov 23 15:59:35 np0005532761 multipathd[237656]: Running command: '/usr/sbin/multipathd -d'
Nov 23 15:59:35 np0005532761 multipathd[237656]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 15:59:35 np0005532761 multipathd[237656]: + umask 0022
Nov 23 15:59:35 np0005532761 multipathd[237656]: + exec /usr/sbin/multipathd -d
Nov 23 15:59:35 np0005532761 systemd[1]: c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65-10855f74135b0e95.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 15:59:35 np0005532761 systemd[1]: c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65-10855f74135b0e95.service: Failed with result 'exit-code'.
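The two systemd lines above are the container's transient healthcheck unit exiting non-zero while multipathd was still coming up; the health_status=starting record at 15:59:35 is the corresponding podman view of the same probe. A way to replay the check by hand, assuming only the container name shown in the log (run as root on the host):

    # Run the configured healthcheck once and show podman's stored health state
    podman healthcheck run multipathd; echo "healthcheck exit=$?"
    podman inspect --format '{{json .State.Health}}' multipathd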
Nov 23 15:59:35 np0005532761 multipathd[237656]: 3524.144346 | --------start up--------
Nov 23 15:59:35 np0005532761 multipathd[237656]: 3524.144363 | read /etc/multipath.conf
Nov 23 15:59:35 np0005532761 multipathd[237656]: 3524.150648 | path checkers start up
Nov 23 15:59:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:35 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 23 15:59:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:36.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
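The beast access lines here and throughout this window are anonymous "HEAD /" probes arriving every couple of seconds from 192.168.122.100 and .102, i.e. load-balancer health checks rather than user traffic. A hand-run equivalent sketch; the endpoint is an assumption, since the log records only the client IPs and not the rgw listening port:

    RGW=http://192.168.122.100:8080   # assumed rgw frontend endpoint; adjust to your deployment
    # -I sends a HEAD request; print only the HTTP status code
    curl -s -o /dev/null -w '%{http_code}\n' -I "$RGW/"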
Nov 23 15:59:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:37.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:59:37 np0005532761 python3.9[237847]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:37.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:37] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 15:59:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:37] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 15:59:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:37 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:38 np0005532761 python3.9[238002]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:59:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 15:59:39 np0005532761 python3.9[238168]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:39 np0005532761 systemd[1]: Stopping multipathd container...
Nov 23 15:59:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:39 np0005532761 multipathd[237656]: 3527.499319 | exit (signal)
Nov 23 15:59:39 np0005532761 multipathd[237656]: 3527.499382 | --------shut down-------
Nov 23 15:59:39 np0005532761 systemd[1]: libpod-c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65.scope: Deactivated successfully.
Nov 23 15:59:39 np0005532761 podman[238173]: 2025-11-23 20:59:39.231691251 +0000 UTC m=+0.066632039 container died c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:59:39 np0005532761 systemd[1]: c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65-10855f74135b0e95.timer: Deactivated successfully.
Nov 23 15:59:39 np0005532761 systemd[1]: Stopped /usr/bin/podman healthcheck run c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65.
Nov 23 15:59:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65-userdata-shm.mount: Deactivated successfully.
Nov 23 15:59:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2d2f6aa83c261d2b9bc8fa45b2a15278ab051320c33442be64c7089ce44139f9-merged.mount: Deactivated successfully.
Nov 23 15:59:39 np0005532761 podman[238173]: 2025-11-23 20:59:39.410023979 +0000 UTC m=+0.244964777 container cleanup c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 15:59:39 np0005532761 podman[238173]: multipathd
Nov 23 15:59:39 np0005532761 podman[238202]: multipathd
Nov 23 15:59:39 np0005532761 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 23 15:59:39 np0005532761 systemd[1]: Stopped multipathd container.
Nov 23 15:59:39 np0005532761 systemd[1]: Starting multipathd container...
Nov 23 15:59:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 15:59:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2f6aa83c261d2b9bc8fa45b2a15278ab051320c33442be64c7089ce44139f9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2f6aa83c261d2b9bc8fa45b2a15278ab051320c33442be64c7089ce44139f9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:39 np0005532761 systemd[1]: Started /usr/bin/podman healthcheck run c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65.
Nov 23 15:59:39 np0005532761 podman[238215]: 2025-11-23 20:59:39.620335205 +0000 UTC m=+0.105217520 container init c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:39 np0005532761 multipathd[238231]: + sudo -E kolla_set_configs
Nov 23 15:59:39 np0005532761 podman[238215]: 2025-11-23 20:59:39.647726682 +0000 UTC m=+0.132608977 container start c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 15:59:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:39 np0005532761 podman[238215]: multipathd
Nov 23 15:59:39 np0005532761 multipathd[238231]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:59:39 np0005532761 multipathd[238231]: INFO:__main__:Validating config file
Nov 23 15:59:39 np0005532761 multipathd[238231]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:59:39 np0005532761 multipathd[238231]: INFO:__main__:Writing out command to execute
Nov 23 15:59:39 np0005532761 systemd[1]: Started multipathd container.
Nov 23 15:59:39 np0005532761 multipathd[238231]: ++ cat /run_command
Nov 23 15:59:39 np0005532761 multipathd[238231]: + CMD='/usr/sbin/multipathd -d'
Nov 23 15:59:39 np0005532761 multipathd[238231]: + ARGS=
Nov 23 15:59:39 np0005532761 multipathd[238231]: + sudo kolla_copy_cacerts
Nov 23 15:59:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:39 np0005532761 multipathd[238231]: + [[ ! -n '' ]]
Nov 23 15:59:39 np0005532761 multipathd[238231]: + . kolla_extend_start
Nov 23 15:59:39 np0005532761 multipathd[238231]: Running command: '/usr/sbin/multipathd -d'
Nov 23 15:59:39 np0005532761 multipathd[238231]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 15:59:39 np0005532761 multipathd[238231]: + umask 0022
Nov 23 15:59:39 np0005532761 multipathd[238231]: + exec /usr/sbin/multipathd -d
Nov 23 15:59:39 np0005532761 multipathd[238231]: 3528.033931 | --------start up--------
Nov 23 15:59:39 np0005532761 multipathd[238231]: 3528.033945 | read /etc/multipath.conf
Nov 23 15:59:39 np0005532761 multipathd[238231]: 3528.041116 | path checkers start up
Nov 23 15:59:39 np0005532761 podman[238238]: 2025-11-23 20:59:39.747080287 +0000 UTC m=+0.091739474 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 15:59:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:39 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:40 np0005532761 python3.9[238425]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
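Taken together, the stat at 15:59:37, the podman ps volume filter at 15:59:38, the systemd restart at 15:59:39, and the file removal directly above form a sentinel-file restart handler: the multipathd container is restarted only when an earlier task flagged a configuration change. A condensed manual equivalent, using only names that appear in the log:

    # Restart the containerized multipathd only when the sentinel exists
    if [ -e /etc/multipath/.multipath_restart_required ]; then
        podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
        systemctl restart edpm_multipathd.service
        rm -f /etc/multipath/.multipath_restart_required
    fi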
Nov 23 15:59:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:40.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:59:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:59:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:41 np0005532761 python3.9[238604]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:59:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:41 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:42 np0005532761 python3.9[238757]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 23 15:59:42 np0005532761 kernel: Key type psk registered
Nov 23 15:59:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:42.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 15:59:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:43 np0005532761 python3.9[238921]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:43.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:43 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:44 np0005532761 python3.9[239044]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931583.1915188-1850-129113672275717/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:44 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:59:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:44 : epoch 6923756e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:59:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:44.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:59:45 np0005532761 podman[239170]: 2025-11-23 20:59:45.130231416 +0000 UTC m=+0.078011338 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 15:59:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23e0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:45 np0005532761 python3.9[239215]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:45.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[226171]: 23/11/2025 20:59:45 : epoch 6923756e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc002f00 fd 39 proxy ignored for local
Nov 23 15:59:45 np0005532761 kernel: ganesha.nfsd[235736]: segfault at 50 ip 00007f24abf2c32e sp 00007f247affc210 error 4 in libntirpc.so.5.8[7f24abf11000+2c000] likely on CPU 2 (core 0, socket 2)

Nov 23 15:59:45 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:59:45 np0005532761 systemd[1]: Started Process Core Dump (PID 239326/UID 0).
Nov 23 15:59:46 np0005532761 python3.9[239378]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:46 np0005532761 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 15:59:46 np0005532761 systemd[1]: Stopped Load Kernel Modules.
Nov 23 15:59:46 np0005532761 systemd[1]: Stopping Load Kernel Modules...
Nov 23 15:59:46 np0005532761 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:59:46 np0005532761 systemd[1]: Finished Load Kernel Modules.
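The block from the modprobe at 15:59:42 to the "Finished Load Kernel Modules" above is the standard persistent-module recipe: load the module now, drop a modules-load.d snippet so it loads at boot, then bounce systemd-modules-load to prove the snippet parses. The same steps by hand; the snippet content is assumed to be the single module name, consistent with the templated file in the log:

    modprobe nvme-fabrics
    printf 'nvme-fabrics\n' > /etc/modules-load.d/nvme-fabrics.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep '^nvme_fabrics'    # lsmod reports the name with an underscore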
Nov 23 15:59:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:59:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 15:59:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:47.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:59:47 np0005532761 systemd-coredump[239338]: Process 226192 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 57:
                                                       #0  0x00007f24abf2c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Nov 23 15:59:47 np0005532761 systemd[1]: systemd-coredump@7-239326-0.service: Deactivated successfully.
Nov 23 15:59:47 np0005532761 systemd[1]: systemd-coredump@7-239326-0.service: Consumed 1.191s CPU time.
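systemd-coredump has now stored the ganesha.nfsd core from the segfault at 15:59:45; the PID in its message identifies the crashed process. To pull the crash back out on this host (coredumpctl ships with systemd on el9; gdb is needed for the last step):

    coredumpctl list ganesha.nfsd
    coredumpctl info 226192
    coredumpctl debug 226192    # attaches gdb to the stored core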
Nov 23 15:59:47 np0005532761 podman[239541]: 2025-11-23 20:59:47.241040178 +0000 UTC m=+0.025171679 container died c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:59:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-197e0a4ea4fc929dfc2864a238e93b67bbd10d1f6d3e3912587de61fad6aae0a-merged.mount: Deactivated successfully.
Nov 23 15:59:47 np0005532761 podman[239541]: 2025-11-23 20:59:47.274542346 +0000 UTC m=+0.058673847 container remove c5a77f6afc5079c3074230d6969dd03013e06ebda29652d80d4b6a16895ed594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:59:47 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:59:47 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 15:59:47 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.627s CPU time.
Nov 23 15:59:47 np0005532761 python3.9[239537]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
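The dnf module invocation above uses only default options, so it reduces to a plain package install, with a quick sanity check that the NVMe userspace tool landed:

    dnf install -y nvme-cli
    nvme version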
Nov 23 15:59:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:47.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:47] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 15:59:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:47] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 15:59:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 15:59:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
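The audit line shows the mgr periodically polling the OSD blocklist. The same query can be issued by hand from any host holding admin credentials:

    ceph osd blocklist ls --format json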
Nov 23 15:59:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 15:59:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:49.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:49 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:49 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:49 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:50 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:50 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:50 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:50 np0005532761 systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 15:59:50 np0005532761 systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 15:59:50 np0005532761 lvm[239700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:59:50 np0005532761 lvm[239700]: VG ceph_vg0 finished
Nov 23 15:59:50 np0005532761 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:59:50 np0005532761 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:59:50 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:50 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:50 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 15:59:51 np0005532761 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:59:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:59:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:59:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:59:51.855 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:59:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:59:51.857 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:59:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 20:59:51.857 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:59:52 np0005532761 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:59:52 np0005532761 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:59:52 np0005532761 systemd[1]: man-db-cache-update.service: Consumed 1.516s CPU time.
Nov 23 15:59:52 np0005532761 systemd[1]: run-ra019c6b2c3484dd69e8b7b18716030e0.service: Deactivated successfully.
Nov 23 15:59:52 np0005532761 python3.9[241043]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:52 np0005532761 systemd[1]: Stopping Open-iSCSI...
Nov 23 15:59:52 np0005532761 iscsid[228424]: iscsid shutting down.
Nov 23 15:59:52 np0005532761 systemd[1]: iscsid.service: Deactivated successfully.
Nov 23 15:59:52 np0005532761 systemd[1]: Stopped Open-iSCSI.
Nov 23 15:59:52 np0005532761 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 15:59:52 np0005532761 systemd[1]: Starting Open-iSCSI...
Nov 23 15:59:52 np0005532761 systemd[1]: Started Open-iSCSI.
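The "unmet condition" line during this restart is expected: iscsi.service is a one-time unit gated on ConditionPathExists=!/etc/iscsi/initiatorname.iscsi, and the initiator name has already been generated. To confirm on the host:

    cat /etc/iscsi/initiatorname.iscsi
    systemctl show -p ConditionResult iscsi.service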
Nov 23 15:59:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205952 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:59:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:52.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:59:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/205953 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
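The haproxy sidecar marks nfs.cephfs.2 down as soon as the crashed ganesha stops answering its Layer4 check. Backend state can also be read live from haproxy's admin socket; the socket path below is an assumption, since the log does not record it:

    HAPROXY_SOCK=/var/lib/haproxy/stats    # assumed admin socket path; check the haproxy config
    echo 'show servers state' | socat UNIX-CONNECT:"$HAPROXY_SOCK" stdio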
Nov 23 15:59:53 np0005532761 python3.9[241199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:59:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:54 np0005532761 python3.9[241356]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 15:59:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:54.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:56 np0005532761 python3.9[241509]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:59:56 np0005532761 systemd[1]: Reloading.
Nov 23 15:59:56 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:56 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:59:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:56.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T20:59:57.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 15:59:57 np0005532761 podman[241669]: 2025-11-23 20:59:57.366968519 +0000 UTC m=+0.076545951 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 23 15:59:57 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 8.
Nov 23 15:59:57 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:59:57 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.627s CPU time.
Nov 23 15:59:57 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
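A restart counter of 8 means systemd has relaunched this ganesha unit eight times since it began crashing. The counter can be queried directly:

    systemctl show -p NRestarts 'ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service'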
Nov 23 15:59:57 np0005532761 python3.9[241703]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:59:57 np0005532761 network[241762]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:59:57 np0005532761 network[241766]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:59:57 np0005532761 network[241767]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:59:57 np0005532761 podman[241784]: 2025-11-23 20:59:57.701830558 +0000 UTC m=+0.040821213 container create fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:57] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:59:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:20:59:57] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 15:59:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a01159bce429f4e132e9e1304fdcff6cd124a9bf6f8f75f4ccfa94847000cc0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a01159bce429f4e132e9e1304fdcff6cd124a9bf6f8f75f4ccfa94847000cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a01159bce429f4e132e9e1304fdcff6cd124a9bf6f8f75f4ccfa94847000cc0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a01159bce429f4e132e9e1304fdcff6cd124a9bf6f8f75f4ccfa94847000cc0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
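The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings mean these overlay mounts sit on XFS without the bigtime feature, so inode timestamps are capped at the signed 32-bit epoch limit. The cutoff in the message is just 0x7fffffff seconds after the Unix epoch:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t value.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00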
Nov 23 15:59:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:57 np0005532761 podman[241784]: 2025-11-23 20:59:57.684139628 +0000 UTC m=+0.023130303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:57 np0005532761 podman[241784]: 2025-11-23 20:59:57.786574345 +0000 UTC m=+0.125565130 container init fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:59:57 np0005532761 podman[241784]: 2025-11-23 20:59:57.791145696 +0000 UTC m=+0.130136351 container start fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:59:57 np0005532761 bash[241784]: fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:59:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 20:59:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:59:58 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:59:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 15:59:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:58.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 15:59:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:59.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 16:00:00 np0005532761 ceph-mon[74569]: overall HEALTH_OK
Nov 23 16:00:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Nov 23 16:00:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:01.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:00:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:00:03
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['volumes', '.mgr', '.nfs', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.log']
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
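The balancer block above is one automatic optimize pass: mode upmap, a 5% misplaced-object ceiling, and "prepared 0/10 upmap changes" meaning none of the up-to-10 candidate remaps were needed because the listed pools are already balanced. A sketch of polling the same state, assuming a reachable cluster and the standard ceph CLI:

    import json, subprocess

    # -f json is the generic ceph CLI output-format flag.
    status = json.loads(
        subprocess.run(
            ["ceph", "balancer", "status", "-f", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    print(status.get("active"), status.get("mode"))  # e.g. True upmap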
Nov 23 16:00:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:00:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
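The pg_autoscaler rows above all follow one formula that can be verified from the logged numbers: raw pg target = (pool's share of space) x (bias) x (cluster PG budget), here 300, consistent with the usual 100 PGs per OSD across 3 OSDs, then quantized to a power of two and clamped to the pool's minimum. Reproducing two rows, using only values printed above:

    # .mgr: 7.185749983720779e-06 of space, bias 1.0
    print(7.185749983720779e-06 * 1.0 * 300)  # 0.0021557249951162337 -> quantized to 1
    # cephfs.cephfs.meta: 5.087256625643029e-07 of space, bias 4.0
    print(5.087256625643029e-07 * 4.0 * 300)  # 0.0006104707950771635 -> quantized to 16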
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:00:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:00:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:03.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:00:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
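Ganesha entered grace at 20:59:57 with a 90-second duration, but the reclaim backend reports zero clients here ("reclaim complete(0) clid count(0)"), so it can lift grace early instead of waiting out the full window (the lift is logged at 21:00:09 below). The nominal window, from the values in the log:

    from datetime import datetime, timedelta

    start = datetime(2025, 11, 23, 20, 59, 57)
    print(start + timedelta(seconds=90))  # 21:01:27 nominal end; lifted early at 21:00:09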
Nov 23 16:00:04 np0005532761 python3.9[242141]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:04 np0005532761 python3.9[242295]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:00:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:05 np0005532761 python3.9[242449]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:05.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:06 np0005532761 python3.9[242602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:00:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:07.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
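Alertmanager's dispatcher gives up on both ceph-dashboard webhook receivers with "context deadline exceeded", i.e. the POST to each compute node's :8443/api/prometheus_receiver timed out rather than being refused. A minimal reachability probe under the same assumption (URL taken from the log; the empty JSON body is only a placeholder, not the real alert payload):

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}",
                                 headers={"Content-Type": "application/json"})
    try:
        # A short timeout stands in for the dispatcher's deadline.
        urllib.request.urlopen(req, timeout=5)
    except Exception as exc:  # timeout, connection refused, HTTP error, ...
        print(f"receiver unreachable: {exc}")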
Nov 23 16:00:07 np0005532761 python3.9[242756]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:00:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:07] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:00:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:07.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:07 np0005532761 python3.9[242910]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:08 np0005532761 python3.9[243063]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:00:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:09 np0005532761 python3.9[243218]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
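The run of ansible-ansible.builtin.systemd_service invocations above walks the old tripleo_nova_* units one by one, each with enabled=False and state=stopped. A host-side sketch of the equivalent loop, assuming the unit names exactly as logged:

    import subprocess

    units = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]
    for u in units:
        # disable --now both disables and stops, matching enabled=False/state=stopped.
        # check=False: a unit that is already gone is not an error here.
        subprocess.run(["systemctl", "disable", "--now", f"{u}.service"], check=False)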
Nov 23 16:00:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:09.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:00:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:00:10 np0005532761 podman[243256]: 2025-11-23 21:00:10.541522957 +0000 UTC m=+0.058974185 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:00:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:00:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2544000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:11 np0005532761 python3.9[243409]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:11.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:12 np0005532761 python3.9[243561]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:12 np0005532761 python3.9[243714]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 16:00:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210013 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
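The repeating svc_vc_recv "proxy header rest len failed" events line up with the haproxy Layer4 check logged here: ganesha's listener expects a PROXY-protocol preamble on each connection, but a bare TCP health check connects and closes without sending one, so the receive path marks that transport dead. For contrast, a client that satisfies the listener leads with a PROXY v1 line before any RPC bytes; a sketch, with the endpoint and addresses as placeholders:

    import socket

    # PROXY protocol v1 preamble: "PROXY TCP4 <src> <dst> <srcport> <dstport>\r\n".
    preamble = b"PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n"
    with socket.create_connection(("127.0.0.1", 2049), timeout=5) as s:  # hypothetical endpoint
        s.sendall(preamble)
        # ...normal NFS/RPC exchange would follow; a bare connect-and-close
        # (what a Layer4 check does) is exactly what trips the log message.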
Nov 23 16:00:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:13 np0005532761 python3.9[243867]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:14 np0005532761 python3.9[244019]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:00:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:15 np0005532761 python3.9[244172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:15 np0005532761 podman[244297]: 2025-11-23 21:00:15.507073785 +0000 UTC m=+0.084606375 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Nov 23 16:00:15 np0005532761 python3.9[244346]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:16 np0005532761 python3.9[244504]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:00:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:17.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:17.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:00:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:00:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:00:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:17.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:00:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:00:18 np0005532761 python3.9[244658]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:18 np0005532761 python3.9[244813]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:00:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:00:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:19.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:00:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:19 np0005532761 python3.9[244966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:20 np0005532761 python3.9[245118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:20 np0005532761 python3.9[245270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:00:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:21.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:21 np0005532761 python3.9[245424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:21 np0005532761 python3.9[245601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:22 np0005532761 python3.9[245753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
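After stopping the units, the playbook deletes their unit files twice over, first from /usr/lib/systemd/system and then from /etc/systemd/system, before the daemon_reload at 16:00:26 below picks the removals up. A compact sketch of that cleanup, assuming the same paths and names (list abridged):

    import subprocess
    from pathlib import Path

    units = ["tripleo_nova_compute.service", "tripleo_nova_vnc_proxy.service"]
    for base in (Path("/usr/lib/systemd/system"), Path("/etc/systemd/system")):
        for u in units:
            # missing_ok mirrors state=absent: removing an absent file is not an error.
            (base / u).unlink(missing_ok=True)
    subprocess.run(["systemctl", "daemon-reload"], check=True)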
Nov 23 16:00:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:00:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:23.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:24 np0005532761 python3.9[245908]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:00:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:25.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:25 np0005532761 python3.9[246061]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 16:00:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:25.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:26 np0005532761 python3.9[246213]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 16:00:26 np0005532761 systemd[1]: Reloading.
Nov 23 16:00:26 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:00:26 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:00:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:27.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:27.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:00:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:27 np0005532761 podman[246280]: 2025-11-23 21:00:27.552236856 +0000 UTC m=+0.064642646 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:00:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 16:00:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Nov 23 16:00:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:27.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:27 np0005532761 python3.9[246421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:28 np0005532761 python3.9[246574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:29.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:29 np0005532761 python3.9[246729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:29 np0005532761 python3.9[246882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.191379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630191460, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 911, "num_deletes": 251, "total_data_size": 1512338, "memory_usage": 1533984, "flush_reason": "Manual Compaction"}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630208014, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1496311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19058, "largest_seqno": 19968, "table_properties": {"data_size": 1491820, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9799, "raw_average_key_size": 19, "raw_value_size": 1482875, "raw_average_value_size": 2953, "num_data_blocks": 96, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931548, "oldest_key_time": 1763931548, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 16653 microseconds, and 5470 cpu microseconds.
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208064) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1496311 bytes OK
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208086) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.209418) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.209439) EVENT_LOG_v1 {"time_micros": 1763931630209433, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.209460) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1508074, prev total WAL file size 1508074, number of live WAL files 2.
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.210167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1461KB)], [41(13MB)]
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630210229, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15560002, "oldest_snapshot_seqno": -1}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4978 keys, 13378579 bytes, temperature: kUnknown
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630334325, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13378579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13344216, "index_size": 20813, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 127057, "raw_average_key_size": 25, "raw_value_size": 13252750, "raw_average_value_size": 2662, "num_data_blocks": 854, "num_entries": 4978, "num_filter_entries": 4978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.334831) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13378579 bytes
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.336171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.1 rd, 107.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(19.3) write-amplify(8.9) OK, records in: 5494, records dropped: 516 output_compression: NoCompression
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.336215) EVENT_LOG_v1 {"time_micros": 1763931630336180, "job": 20, "event": "compaction_finished", "compaction_time_micros": 124363, "compaction_time_cpu_micros": 24529, "output_level": 6, "num_output_files": 1, "total_output_size": 13378579, "num_input_records": 5494, "num_output_records": 4978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630337017, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630340482, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.210053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.340703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.340711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.340715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.340718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:00:30.340721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532761 python3.9[247035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 16:00:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:31 np0005532761 python3.9[247189]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:31 np0005532761 python3.9[247343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:32 np0005532761 python3.9[247496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:33.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:00:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:00:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:00:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:00:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:00:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:00:34 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 16:00:34 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:00:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:35.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:35 np0005532761 python3.9[247772]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:35.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:00:35 np0005532761 python3.9[247956]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:00:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:00:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.4 total, 600.0 interval#012Cumulative writes: 7984 writes, 31K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7984 writes, 1682 syncs, 4.75 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 809 writes, 1426 keys, 809 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 809 writes, 400 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.32482661 +0000 UTC m=+0.041122791 container create dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:00:36 np0005532761 systemd[1]: Started libpod-conmon-dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f.scope.
Nov 23 16:00:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.305370814 +0000 UTC m=+0.021667015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.409208708 +0000 UTC m=+0.125504919 container init dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.419637475 +0000 UTC m=+0.135933656 container start dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.42288325 +0000 UTC m=+0.139179431 container attach dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:00:36 np0005532761 determined_pike[248214]: 167 167
Nov 23 16:00:36 np0005532761 systemd[1]: libpod-dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f.scope: Deactivated successfully.
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.426683802 +0000 UTC m=+0.142980003 container died dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:00:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-335942e732d51bc85595e969e7c74607c6a591bf0207dd2894f855c44908b132-merged.mount: Deactivated successfully.
Nov 23 16:00:36 np0005532761 podman[248162]: 2025-11-23 21:00:36.480296613 +0000 UTC m=+0.196592794 container remove dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 16:00:36 np0005532761 systemd[1]: libpod-conmon-dd6e1534fa67ffc1ce096b9a84a56d541ce02352bda66597dee88ad5dbd62e7f.scope: Deactivated successfully.
Nov 23 16:00:36 np0005532761 python3.9[248216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:36 np0005532761 podman[248240]: 2025-11-23 21:00:36.647257 +0000 UTC m=+0.045197199 container create a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:00:36 np0005532761 systemd[1]: Started libpod-conmon-a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e.scope.
Nov 23 16:00:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:36 np0005532761 podman[248240]: 2025-11-23 21:00:36.628954554 +0000 UTC m=+0.026894763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:36 np0005532761 podman[248240]: 2025-11-23 21:00:36.730732904 +0000 UTC m=+0.128673093 container init a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:00:36 np0005532761 podman[248240]: 2025-11-23 21:00:36.73887768 +0000 UTC m=+0.136817879 container start a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:00:36 np0005532761 podman[248240]: 2025-11-23 21:00:36.743350158 +0000 UTC m=+0.141290347 container attach a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:00:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:37 np0005532761 serene_euclid[248280]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:00:37 np0005532761 serene_euclid[248280]: --> All data devices are unavailable
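[annotation] The two serene_euclid lines above are ceph-volume, run by cephadm inside a short-lived ceph container, reporting that the only candidate data device is an LVM volume already in use. A minimal sketch of the same probe, assuming ceph-volume is available on the host PATH rather than inside a container:

    # Sketch: list what ceph-volume considers deployable, as cephadm's
    # short-lived containers do above. Assumes ceph-volume is on PATH.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        state = "available" if dev["available"] else "unavailable"
        print(dev["path"], state, dev.get("rejected_reasons", []))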
Nov 23 16:00:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
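[annotation] The beast access-log entries above record anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102, the pattern a load-balancer health check leaves in radosgw. A minimal sketch reproducing one probe, assuming radosgw listens on port 8080 (the port is an assumption; it does not appear in these lines):

    import http.client

    # Send the same anonymous HEAD / request the health checker uses.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy radosgw answers 200
    conn.close()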
Nov 23 16:00:37 np0005532761 systemd[1]: libpod-a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e.scope: Deactivated successfully.
Nov 23 16:00:37 np0005532761 podman[248240]: 2025-11-23 21:00:37.056687387 +0000 UTC m=+0.454627576 container died a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 16:00:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:37.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:00:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:37.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
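[annotation] Alertmanager is failing to deliver ceph-dashboard webhook notifications to compute-1/compute-2 on port 8443 (i/o timeout, context deadline exceeded). A hypothetical stand-in receiver for testing reachability of that port; it is not the dashboard's real handler, only an HTTP sink that accepts the POST:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Sink(BaseHTTPRequestHandler):
        def do_POST(self):
            # Drain and acknowledge the alert payload so retries stop.
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Sink).serve_forever()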
Nov 23 16:00:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ab3e8e3ca0a68ae4e5d25f81994261a7b472049d68241c84083f3f62bb520463-merged.mount: Deactivated successfully.
Nov 23 16:00:37 np0005532761 podman[248240]: 2025-11-23 21:00:37.100318124 +0000 UTC m=+0.498258313 container remove a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_euclid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:00:37 np0005532761 systemd[1]: libpod-conmon-a2847f5678f08e9729c175284a40bab4859acc2a235b1eccb566f8f33f05635e.scope: Deactivated successfully.
Nov 23 16:00:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:00:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:00:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.767557387 +0000 UTC m=+0.035634887 container create 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:00:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:37 np0005532761 systemd[1]: Started libpod-conmon-32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281.scope.
Nov 23 16:00:37 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.84351456 +0000 UTC m=+0.111592080 container init 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.752186569 +0000 UTC m=+0.020264089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.849060678 +0000 UTC m=+0.117138178 container start 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:00:37 np0005532761 gallant_yonath[248547]: 167 167
Nov 23 16:00:37 np0005532761 systemd[1]: libpod-32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281.scope: Deactivated successfully.
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.853984198 +0000 UTC m=+0.122061708 container attach 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.854512072 +0000 UTC m=+0.122589572 container died 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:00:37 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ca4f6c658e82ea0398cc4b6f46a1d05dd39c06eaa7c4076b28504226490fd26d-merged.mount: Deactivated successfully.
Nov 23 16:00:37 np0005532761 python3.9[248530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:37 np0005532761 podman[248531]: 2025-11-23 21:00:37.901286522 +0000 UTC m=+0.169364022 container remove 32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:00:37 np0005532761 systemd[1]: libpod-conmon-32508f091f8d5ebcb791873f8caaad213e90b7ca62b15f0dae1932c68ab82281.scope: Deactivated successfully.
Nov 23 16:00:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.075142972 +0000 UTC m=+0.046101684 container create 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:00:38 np0005532761 systemd[1]: Started libpod-conmon-8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b.scope.
Nov 23 16:00:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.056043106 +0000 UTC m=+0.027001848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b35abb5149535578d62cfdeecb007a3cf849f2cf1ae87e939245e6481a6a79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b35abb5149535578d62cfdeecb007a3cf849f2cf1ae87e939245e6481a6a79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b35abb5149535578d62cfdeecb007a3cf849f2cf1ae87e939245e6481a6a79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b35abb5149535578d62cfdeecb007a3cf849f2cf1ae87e939245e6481a6a79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.161136723 +0000 UTC m=+0.132095455 container init 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.171471646 +0000 UTC m=+0.142430358 container start 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.176781627 +0000 UTC m=+0.147740349 container attach 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 16:00:38 np0005532761 epic_solomon[248663]: {
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:    "1": [
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:        {
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "devices": [
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "/dev/loop3"
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            ],
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "lv_name": "ceph_lv0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "lv_size": "21470642176",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "name": "ceph_lv0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "tags": {
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.cluster_name": "ceph",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.crush_device_class": "",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.encrypted": "0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.osd_id": "1",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.type": "block",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.vdo": "0",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:                "ceph.with_tpm": "0"
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            },
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "type": "block",
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:            "vg_name": "ceph_vg0"
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:        }
Nov 23 16:00:38 np0005532761 epic_solomon[248663]:    ]
Nov 23 16:00:38 np0005532761 epic_solomon[248663]: }
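[annotation] The JSON block above, emitted by the epic_solomon container, matches the shape of ceph-volume lvm list --format json: top-level keys are OSD ids, each mapping to a list of logical volumes with their LVM tags. A minimal parsing sketch (feed it the JSON block on stdin):

    import json
    import sys

    data = json.load(sys.stdin)
    for osd_id, lvs in data.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")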
Nov 23 16:00:38 np0005532761 systemd[1]: libpod-8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b.scope: Deactivated successfully.
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.537203374 +0000 UTC m=+0.508162116 container died 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 16:00:38 np0005532761 python3.9[248744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:38 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f5b35abb5149535578d62cfdeecb007a3cf849f2cf1ae87e939245e6481a6a79-merged.mount: Deactivated successfully.
Nov 23 16:00:38 np0005532761 podman[248596]: 2025-11-23 21:00:38.588101974 +0000 UTC m=+0.559060686 container remove 8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:00:38 np0005532761 systemd[1]: libpod-conmon-8eece9ffc4d65d80a532f2aeb701173a7a26057af7e6bee5dde80b36de2e4f0b.scope: Deactivated successfully.
Nov 23 16:00:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:39.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.185934066 +0000 UTC m=+0.045898368 container create b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:00:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:39 np0005532761 systemd[1]: Started libpod-conmon-b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a.scope.
Nov 23 16:00:39 np0005532761 python3.9[248986]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.262398173 +0000 UTC m=+0.122362495 container init b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.168449552 +0000 UTC m=+0.028413884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.274437183 +0000 UTC m=+0.134401485 container start b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.278560822 +0000 UTC m=+0.138525144 container attach b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:00:39 np0005532761 systemd[1]: libpod-b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a.scope: Deactivated successfully.
Nov 23 16:00:39 np0005532761 recursing_kapitsa[249020]: 167 167
Nov 23 16:00:39 np0005532761 conmon[249020]: conmon b2dc540d1f3c331ae481 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a.scope/container/memory.events
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.284766776 +0000 UTC m=+0.144731078 container died b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 23 16:00:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ab6f4d39b3ceae6bdf316ca5308cf9aa6c91bd4f343e73f179793a3ef12dd47b-merged.mount: Deactivated successfully.
Nov 23 16:00:39 np0005532761 podman[249004]: 2025-11-23 21:00:39.325291241 +0000 UTC m=+0.185255543 container remove b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kapitsa, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:00:39 np0005532761 systemd[1]: libpod-conmon-b2dc540d1f3c331ae4815cafcd98852748b5cb61b528c5b227c5e4e5a5af5c6a.scope: Deactivated successfully.
Nov 23 16:00:39 np0005532761 podman[249093]: 2025-11-23 21:00:39.495390252 +0000 UTC m=+0.047248774 container create 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:00:39 np0005532761 systemd[1]: Started libpod-conmon-4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061.scope.
Nov 23 16:00:39 np0005532761 podman[249093]: 2025-11-23 21:00:39.475235807 +0000 UTC m=+0.027094139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:00:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:00:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77bb0bff930ce05d9ba3d78757ec3c32dd9bfd4427de982da113a186e1c7ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77bb0bff930ce05d9ba3d78757ec3c32dd9bfd4427de982da113a186e1c7ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77bb0bff930ce05d9ba3d78757ec3c32dd9bfd4427de982da113a186e1c7ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77bb0bff930ce05d9ba3d78757ec3c32dd9bfd4427de982da113a186e1c7ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:00:39 np0005532761 podman[249093]: 2025-11-23 21:00:39.601274619 +0000 UTC m=+0.153132941 container init 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:00:39 np0005532761 podman[249093]: 2025-11-23 21:00:39.608304215 +0000 UTC m=+0.160162517 container start 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:00:39 np0005532761 podman[249093]: 2025-11-23 21:00:39.612015044 +0000 UTC m=+0.163873366 container attach 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:00:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:39 np0005532761 python3.9[249216]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:40 np0005532761 lvm[249362]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:00:40 np0005532761 lvm[249362]: VG ceph_vg0 finished
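[annotation] The lvm autoactivation messages above confirm the loop-backed PV came online and completed VG ceph_vg0. A minimal sketch, assuming the LVM tools are installed, listing that VG's logical volumes and backing devices:

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,devices"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"] == "ceph_vg0":
            print(lv["lv_name"], lv["devices"])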
Nov 23 16:00:40 np0005532761 frosty_thompson[249159]: {}
Nov 23 16:00:40 np0005532761 systemd[1]: libpod-4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061.scope: Deactivated successfully.
Nov 23 16:00:40 np0005532761 podman[249093]: 2025-11-23 21:00:40.328937224 +0000 UTC m=+0.880795556 container died 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 16:00:40 np0005532761 systemd[1]: libpod-4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061.scope: Consumed 1.223s CPU time.
Nov 23 16:00:40 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ee77bb0bff930ce05d9ba3d78757ec3c32dd9bfd4427de982da113a186e1c7ef-merged.mount: Deactivated successfully.
Nov 23 16:00:40 np0005532761 podman[249093]: 2025-11-23 21:00:40.375314014 +0000 UTC m=+0.927172316 container remove 4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:00:40 np0005532761 systemd[1]: libpod-conmon-4d0d999b9ee38fbf91f21661aabc7d05b77f135798f609f5c5cb252e8a9a5061.scope: Deactivated successfully.
Nov 23 16:00:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:00:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:00:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
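[annotation] The two handle_command entries above show the cephadm mgr module caching this host's device inventory in the monitors' config-key store. A minimal sketch reading one key back, assuming the ceph CLI and an admin keyring are available on this node:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(out[:400])  # peek at the cached inventory blob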
Nov 23 16:00:40 np0005532761 python3.9[249477]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:40 np0005532761 podman[249478]: 2025-11-23 21:00:40.772468055 +0000 UTC m=+0.059463538 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
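[annotation] The health_status event above embeds the container's full podman config, including its healthcheck ('test': '/openstack/healthcheck'). A minimal sketch triggering the same check on demand, assuming podman is installed and the multipathd container is running:

    import subprocess

    # podman healthcheck run exits 0 when the configured test passes.
    res = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                         capture_output=True, text=True)
    print("healthy" if res.returncode == 0
          else f"unhealthy: {res.stdout or res.stderr}")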
Nov 23 16:00:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 16:00:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:41 np0005532761 python3.9[249647]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:41 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:41 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:41.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:42 np0005532761 python3.9[249824]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:43.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 16:00:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:45.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:46 np0005532761 podman[249854]: 2025-11-23 21:00:46.581081758 +0000 UTC m=+0.099022016 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:00:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:47.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:47.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:00:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:00:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:00:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:00:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:00:48 np0005532761 python3.9[250009]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 23 16:00:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:49.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:49 np0005532761 python3.9[250166]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 16:00:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:50 np0005532761 python3.9[250324]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 16:00:50 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:00:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 16:00:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:51.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:51 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:51 np0005532761 systemd-logind[820]: New session 55 of user zuul.
Nov 23 16:00:51 np0005532761 systemd[1]: Started Session 55 of User zuul.
Nov 23 16:00:51 np0005532761 ceph-osd[83114]: bluestore.MempoolThread fragmentation_score=0.000028 took=0.000040s
Nov 23 16:00:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:51 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:00:51.857 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:00:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:00:51.858 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:00:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:00:51.858 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:00:51 np0005532761 systemd[1]: session-55.scope: Deactivated successfully.
Nov 23 16:00:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:51 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:51 np0005532761 systemd-logind[820]: Session 55 logged out. Waiting for processes to exit.
Nov 23 16:00:51 np0005532761 systemd-logind[820]: Removed session 55.
Nov 23 16:00:52 np0005532761 python3.9[250513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:53.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:53 np0005532761 python3.9[250636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931652.1081097-3433-114340475999643/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:53 np0005532761 python3.9[250786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210053 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:00:54 np0005532761 python3.9[250862]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:54 np0005532761 python3.9[251013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 16:00:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:55.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:55 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:55 np0005532761 python3.9[251135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931654.364778-3433-62394958303974/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:55 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:55 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:55 np0005532761 python3.9[251285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:56 np0005532761 python3.9[251406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931655.4648113-3433-194647180268796/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:00:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:57.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:00:57.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:00:57 np0005532761 python3.9[251557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:57 np0005532761 python3.9[251679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931656.6582692-3433-184244683108756/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:57] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:00:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:00:57] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:00:57 np0005532761 podman[251680]: 2025-11-23 21:00:57.741479699 +0000 UTC m=+0.083344390 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:57.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:58 np0005532761 python3.9[251846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:58 np0005532761 python3.9[251968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931657.7960153-3433-208701745854060/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:00:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:59.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:59 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:59 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:00:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:59.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:00:59 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 16:01:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:01 np0005532761 python3.9[252122]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:01 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:01 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:01.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:01 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:02 np0005532761 python3.9[252300]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:02 np0005532761 python3.9[252468]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:01:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:03.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:01:03
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr', '.nfs', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images']
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:01:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:01:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:01:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:01:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:01:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:03.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:03 np0005532761 python3.9[252623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:03 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:04 np0005532761 python3.9[252746]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763931663.4178073-3754-160502441111885/.source _original_basename=.zszusuq1 follow=False checksum=3fd2874f49f7f0fb6cb3d75a0209a5a8aa4fd1ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 23 16:01:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:01:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:05.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:05 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:05 np0005532761 python3.9[252900]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:05 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:05 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:06 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:01:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:06 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:01:06 np0005532761 python3.9[253052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:01:07 np0005532761 python3.9[253174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931666.1406329-3832-1114924743029/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:01:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:07.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:01:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:07 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:07] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 16:01:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:07] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 16:01:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:07 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:07 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25380021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:08 np0005532761 python3.9[253325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:08 np0005532761 python3.9[253446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931667.6139426-3877-30265314089483/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:01:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:01:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:01:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:09.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:09 np0005532761 python3.9[253601]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 23 16:01:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:09 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:10 np0005532761 podman[253754]: 2025-11-23 21:01:10.914590707 +0000 UTC m=+0.068694532 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 23 16:01:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:01:11 np0005532761 python3.9[253755]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 16:01:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:11.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:11 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:12 np0005532761 python3[253927]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 16:01:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:01:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25140038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:13.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:01:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:15.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210115 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:01:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 16:01:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:17.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:01:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:17.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:17] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 16:01:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:17] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 23 16:01:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:17.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:01:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:01:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 16:01:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:19.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:19 np0005532761 podman[253988]: 2025-11-23 21:01:19.330726819 +0000 UTC m=+1.845201167 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 16:01:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:19.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 16:01:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:21.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:01:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:23.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c002f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:01:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:25.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:27.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:01:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c002f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 23 16:01:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 23 16:01:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25240045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:27.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:29.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c002f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:29.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25240045c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:31.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:31.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:32 np0005532761 podman[254076]: 2025-11-23 21:01:32.400051959 +0000 UTC m=+3.924498043 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:01:32 np0005532761 podman[253942]: 2025-11-23 21:01:32.634526896 +0000 UTC m=+20.389274927 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:32 np0005532761 podman[254120]: 2025-11-23 21:01:32.745696644 +0000 UTC m=+0.020729291 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:32 np0005532761 podman[254120]: 2025-11-23 21:01:32.840962 +0000 UTC m=+0.115994607 container create ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:01:32 np0005532761 python3[253927]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:01:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:01:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:01:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25240045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:33.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 16:01:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:35.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:37.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:37.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:01:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:37 np0005532761 python3.9[254316]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:01:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:37] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210138 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:01:38 np0005532761 python3.9[254471]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 23 16:01:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:39 np0005532761 python3.9[254624]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 16:01:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:40 np0005532761 python3[254777]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 16:01:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:41 np0005532761 podman[254879]: 2025-11-23 21:01:41.121888677 +0000 UTC m=+0.063166296 container create 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 23 16:01:41 np0005532761 podman[254879]: 2025-11-23 21:01:41.085553794 +0000 UTC m=+0.026831493 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:41 np0005532761 python3[254777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 23 16:01:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:01:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:01:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:41 np0005532761 podman[254961]: 2025-11-23 21:01:41.540642791 +0000 UTC m=+0.060894045 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 16:01:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:41.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:01:42 np0005532761 python3.9[255133]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.767952915 +0000 UTC m=+0.049695379 container create 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:42 np0005532761 systemd[1]: Started libpod-conmon-63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc.scope.
Nov 23 16:01:42 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.745355686 +0000 UTC m=+0.027098190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.858543276 +0000 UTC m=+0.140285760 container init 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.86545034 +0000 UTC m=+0.147192804 container start 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.870086003 +0000 UTC m=+0.151828477 container attach 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 16:01:42 np0005532761 xenodochial_gagarin[255363]: 167 167
Nov 23 16:01:42 np0005532761 systemd[1]: libpod-63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc.scope: Deactivated successfully.
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.873334099 +0000 UTC m=+0.155076563 container died 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:01:42 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cc2a4888839241e6a1b0d9d0cf89073b05bf5c62ac8cf11d6679d7f553ee22ad-merged.mount: Deactivated successfully.
Nov 23 16:01:42 np0005532761 podman[255303]: 2025-11-23 21:01:42.921530527 +0000 UTC m=+0.203272991 container remove 63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:01:42 np0005532761 systemd[1]: libpod-conmon-63bedbcb11fd4b39278baa7d231c0217444c044a92342eee9d00a22c4d3cdcfc.scope: Deactivated successfully.
Nov 23 16:01:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.092266414 +0000 UTC m=+0.056227672 container create fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:01:43 np0005532761 python3.9[255408]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:43.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:43 np0005532761 systemd[1]: Started libpod-conmon-fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a.scope.
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.061394115 +0000 UTC m=+0.025355373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:43 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.19734527 +0000 UTC m=+0.161306558 container init fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.204173491 +0000 UTC m=+0.168134749 container start fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.208212728 +0000 UTC m=+0.172173986 container attach fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:01:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:43 np0005532761 competent_chaum[255435]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:01:43 np0005532761 competent_chaum[255435]: --> All data devices are unavailable
Nov 23 16:01:43 np0005532761 systemd[1]: libpod-fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a.scope: Deactivated successfully.
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.530458693 +0000 UTC m=+0.494419971 container died fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 16:01:43 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ebdaccdf5761e740d72af9b62f596624c4ebbe04456a69c743399f22f6b4b706-merged.mount: Deactivated successfully.
Nov 23 16:01:43 np0005532761 podman[255419]: 2025-11-23 21:01:43.577543492 +0000 UTC m=+0.541504750 container remove fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:01:43 np0005532761 systemd[1]: libpod-conmon-fd4b88dd5c99d363fd1431e3ce7c89cca92e73ffec1ad2e18f17508ee230fc3a.scope: Deactivated successfully.
Nov 23 16:01:43 np0005532761 python3.9[255610]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931703.1982448-4153-173483020873684/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:43.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.179780652 +0000 UTC m=+0.093874571 container create 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.1080702 +0000 UTC m=+0.022164139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:44 np0005532761 systemd[1]: Started libpod-conmon-72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66.scope.
Nov 23 16:01:44 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:44 np0005532761 python3.9[255777]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 16:01:44 np0005532761 systemd[1]: Reloading.
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.428113686 +0000 UTC m=+0.342207645 container init 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.436277032 +0000 UTC m=+0.350370961 container start 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 16:01:44 np0005532761 youthful_kapitsa[255797]: 167 167
Nov 23 16:01:44 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:01:44 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.524669426 +0000 UTC m=+0.438763355 container attach 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.526595908 +0000 UTC m=+0.440689837 container died 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 16:01:44 np0005532761 systemd[1]: libpod-72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66.scope: Deactivated successfully.
Nov 23 16:01:44 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7a5eccd20e139aca9cd1c76e2967b81fb6a58b3655d95df0dd93795d86c6b4ff-merged.mount: Deactivated successfully.
Nov 23 16:01:44 np0005532761 podman[255781]: 2025-11-23 21:01:44.768996566 +0000 UTC m=+0.683090495 container remove 72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:01:44 np0005532761 systemd[1]: libpod-conmon-72d089f40efbeb545207d50b1ad85cf54ad8fb7767066e8b6a97744e3dd9cf66.scope: Deactivated successfully.
Nov 23 16:01:44 np0005532761 podman[255868]: 2025-11-23 21:01:44.958433558 +0000 UTC m=+0.053317764 container create d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:01:45 np0005532761 systemd[1]: Started libpod-conmon-d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e.scope.
Nov 23 16:01:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:44.931979847 +0000 UTC m=+0.026864053 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b25f62b9b061fc8896ba412cafe3f493bf6f796bbbce256d79bf471f900419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b25f62b9b061fc8896ba412cafe3f493bf6f796bbbce256d79bf471f900419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b25f62b9b061fc8896ba412cafe3f493bf6f796bbbce256d79bf471f900419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b25f62b9b061fc8896ba412cafe3f493bf6f796bbbce256d79bf471f900419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:45.086696129 +0000 UTC m=+0.181580345 container init d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:45.101820891 +0000 UTC m=+0.196705087 container start d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:45.111096276 +0000 UTC m=+0.205980502 container attach d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 16:01:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:45.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]: {
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:    "1": [
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:        {
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "devices": [
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "/dev/loop3"
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            ],
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "lv_name": "ceph_lv0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "lv_size": "21470642176",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "name": "ceph_lv0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "tags": {
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.cluster_name": "ceph",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.crush_device_class": "",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.encrypted": "0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.osd_id": "1",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.type": "block",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.vdo": "0",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:                "ceph.with_tpm": "0"
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            },
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "type": "block",
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:            "vg_name": "ceph_vg0"
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:        }
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]:    ]
Nov 23 16:01:45 np0005532761 gifted_merkle[255922]: }
Nov 23 16:01:45 np0005532761 python3.9[255955]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:01:45 np0005532761 systemd[1]: libpod-d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e.scope: Deactivated successfully.
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:45.411872231 +0000 UTC m=+0.506756427 container died d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:01:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-68b25f62b9b061fc8896ba412cafe3f493bf6f796bbbce256d79bf471f900419-merged.mount: Deactivated successfully.
Nov 23 16:01:45 np0005532761 podman[255868]: 2025-11-23 21:01:45.455695173 +0000 UTC m=+0.550579369 container remove d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:01:45 np0005532761 systemd[1]: libpod-conmon-d348d5219d74ec8974f0ef2a44089b3db81b41d9e4254ed4bcd456b76e09da6e.scope: Deactivated successfully.
Nov 23 16:01:45 np0005532761 systemd[1]: Reloading.
Nov 23 16:01:45 np0005532761 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:01:45 np0005532761 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:01:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:45 np0005532761 systemd[1]: Starting nova_compute container...
Nov 23 16:01:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:46 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524004680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 podman[256038]: 2025-11-23 21:01:46.070131336 +0000 UTC m=+0.223913538 container init 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:01:46 np0005532761 podman[256038]: 2025-11-23 21:01:46.076624019 +0000 UTC m=+0.230406211 container start 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 16:01:46 np0005532761 podman[256038]: nova_compute
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + sudo -E kolla_set_configs
Nov 23 16:01:46 np0005532761 systemd[1]: Started nova_compute container.
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Validating config file
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying service configuration files
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Deleting /etc/ceph
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Creating directory /etc/ceph
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Writing out command to execute
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:46 np0005532761 nova_compute[256079]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:46 np0005532761 nova_compute[256079]: ++ cat /run_command
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + CMD=nova-compute
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + ARGS=
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + sudo kolla_copy_cacerts
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + [[ ! -n '' ]]
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + . kolla_extend_start
Nov 23 16:01:46 np0005532761 nova_compute[256079]: Running command: 'nova-compute'
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + umask 0022
Nov 23 16:01:46 np0005532761 nova_compute[256079]: + exec nova-compute
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.287940402 +0000 UTC m=+0.035853512 container create 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:01:46 np0005532761 systemd[1]: Started libpod-conmon-60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d.scope.
Nov 23 16:01:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.272859842 +0000 UTC m=+0.020772982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.376912751 +0000 UTC m=+0.124825901 container init 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.385671262 +0000 UTC m=+0.133584372 container start 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.388831777 +0000 UTC m=+0.136744937 container attach 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:01:46 np0005532761 jovial_wilson[256171]: 167 167
Nov 23 16:01:46 np0005532761 systemd[1]: libpod-60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d.scope: Deactivated successfully.
Nov 23 16:01:46 np0005532761 conmon[256171]: conmon 60ec09ea140303e0ce6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d.scope/container/memory.events
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.396172021 +0000 UTC m=+0.144085171 container died 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 16:01:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-032368edf1865b656a9d85ed8e1bdbe4f6990c237661acae86d8231ded5a5be5-merged.mount: Deactivated successfully.
Nov 23 16:01:46 np0005532761 podman[256155]: 2025-11-23 21:01:46.436067589 +0000 UTC m=+0.183980709 container remove 60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 16:01:46 np0005532761 systemd[1]: libpod-conmon-60ec09ea140303e0ce6edf7341ad0a740844e95d619c2e7044bcd1c9ae6fb30d.scope: Deactivated successfully.
Nov 23 16:01:46 np0005532761 podman[256195]: 2025-11-23 21:01:46.593149045 +0000 UTC m=+0.041462241 container create fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 16:01:46 np0005532761 systemd[1]: Started libpod-conmon-fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb.scope.
Nov 23 16:01:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d795c45d944a579c45067404f26d467a3a6bcd34e266efcd32eb8470d9f8eeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 podman[256195]: 2025-11-23 21:01:46.573893474 +0000 UTC m=+0.022206700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d795c45d944a579c45067404f26d467a3a6bcd34e266efcd32eb8470d9f8eeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d795c45d944a579c45067404f26d467a3a6bcd34e266efcd32eb8470d9f8eeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d795c45d944a579c45067404f26d467a3a6bcd34e266efcd32eb8470d9f8eeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:46 np0005532761 podman[256195]: 2025-11-23 21:01:46.691473822 +0000 UTC m=+0.139787038 container init fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:01:46 np0005532761 podman[256195]: 2025-11-23 21:01:46.700002378 +0000 UTC m=+0.148315574 container start fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:46 np0005532761 podman[256195]: 2025-11-23 21:01:46.703626164 +0000 UTC m=+0.151939390 container attach fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 16:01:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:01:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:47.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:01:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:47.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:01:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:47 np0005532761 lvm[256361]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:01:47 np0005532761 lvm[256361]: VG ceph_vg0 finished
Nov 23 16:01:47 np0005532761 lucid_sammet[256211]: {}
Nov 23 16:01:47 np0005532761 systemd[1]: libpod-fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb.scope: Deactivated successfully.
Nov 23 16:01:47 np0005532761 systemd[1]: libpod-fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb.scope: Consumed 1.076s CPU time.
Nov 23 16:01:47 np0005532761 podman[256195]: 2025-11-23 21:01:47.399959909 +0000 UTC m=+0.848273135 container died fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:01:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1d795c45d944a579c45067404f26d467a3a6bcd34e266efcd32eb8470d9f8eeb-merged.mount: Deactivated successfully.
Nov 23 16:01:47 np0005532761 podman[256195]: 2025-11-23 21:01:47.454557956 +0000 UTC m=+0.902871152 container remove fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:01:47 np0005532761 systemd[1]: libpod-conmon-fcf01d3737a1e5727e8193c7fba39f720739bef50a17fb79fb35aa5d6f136fdb.scope: Deactivated successfully.
Nov 23 16:01:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:01:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:01:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:47 np0005532761 python3.9[256428]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:01:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:47] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Nov 23 16:01:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:48 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:01:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:01:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:48 np0005532761 python3.9[256603]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.670 256090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.671 256090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.671 256090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.671 256090 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.823 256090 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.851 256090 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:01:48 np0005532761 nova_compute[256079]: 2025-11-23 21:01:48.852 256090 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 16:01:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:01:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:49.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.307 256090 INFO nova.virt.driver [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.580 256090 INFO nova.compute.provider_config [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.608 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.608 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.609 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.609 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.609 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.610 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.611 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.612 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.613 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.614 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.615 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.616 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.617 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.618 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.619 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.620 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 python3.9[256759]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.621 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.622 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.623 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.624 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.625 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.625 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.625 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.625 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.625 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.626 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.627 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.628 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.629 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.630 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.631 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.632 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.633 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.634 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.635 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.636 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.637 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.638 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.639 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.640 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.641 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.641 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.641 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.641 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.641 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.642 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.643 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.644 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.645 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.646 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.646 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.646 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.646 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.646 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.647 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.648 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.649 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.650 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.651 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.652 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.653 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.654 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.655 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.656 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.657 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.658 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.659 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.660 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.661 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.662 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.663 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.663 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.663 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.663 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.663 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.664 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.665 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.666 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.667 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
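[editor's note] The `****` rendered for `key_manager.fixed_key` (and later for `neutron.metadata_proxy_shared_secret`) is oslo_config's secret masking: options registered with `secret=True` are never written to logs in clear text by `log_opt_values()`, the method producing every DEBUG line in this dump. A minimal, self-contained sketch of that mechanism follows; the option names mirror the log, but the registration code here is illustrative, not Nova's actual definitions.

```python
# Sketch: why secret options appear as "****" in the log_opt_values() dump.
# Assumption: option names copied from the log; defaults are illustrative.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.StrOpt('backend', default='barbican'),
        # secret=True tells oslo_config to mask this value in any log output.
        cfg.StrOpt('fixed_key', secret=True),
    ],
    group='key_manager',
)

CONF([], project='nova')
# This is the same call that produced the journal lines above: every
# registered option is dumped at DEBUG, with secret values shown as "****".
CONF.log_opt_values(LOG, logging.DEBUG)
```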
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.668 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.669 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.670 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.671 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.672 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.673 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.674 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.675 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.676 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.677 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.678 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.679 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.680 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.681 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.682 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 WARNING oslo_config.cfg [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 16:01:49 np0005532761 nova_compute[256079]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 16:01:49 np0005532761 nova_compute[256079]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 16:01:49 np0005532761 nova_compute[256079]: and ``live_migration_inbound_addr`` respectively.
Nov 23 16:01:49 np0005532761 nova_compute[256079]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.683 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
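[editor's note] The WARNING above is oslo_config's standard notice for an option registered with `deprecated_for_removal=True` that is still set in the loaded configuration: here `libvirt.live_migration_uri` (value `qemu+tls://%s/system`, where `%s` is the migration target host) is superseded by the `live_migration_scheme` / `live_migration_inbound_addr` pair named in the message. A minimal sketch of how such a registration produces exactly this warning follows; the option names match the log, while the help text, types, and defaults are illustrative rather than Nova's actual option definitions.

```python
# Sketch: an option deprecated for removal, plus its replacement pair.
# Assumption: names from the log; deprecated_reason text is illustrative.
from oslo_config import cfg

CONF = cfg.CONF

CONF.register_opts(
    [
        # If nova.conf sets this option, oslo_config emits the WARNING seen
        # above at parse time ("deprecated for removal ... may be silently
        # ignored in the future").
        cfg.StrOpt(
            'live_migration_uri',
            deprecated_for_removal=True,
            deprecated_reason='Replaced by live_migration_scheme and '
                              'live_migration_inbound_addr.',
        ),
        # Replacement pair: the scheme picks the transport for the libvirt
        # migration URI, the inbound address fills the target-host part that
        # the old template expressed as %s.
        cfg.StrOpt('live_migration_scheme'),
        cfg.HostAddressOpt('live_migration_inbound_addr'),
    ],
    group='libvirt',
)

if __name__ == '__main__':
    CONF(['--config-file', 'nova.conf'], project='nova')
    # The deprecated option remains readable for now, but as the warning
    # says, future releases may ignore its value.
    print(CONF.libvirt.live_migration_uri)
```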
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.684 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.685 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.686 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.687 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.688 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.689 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.690 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.691 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.692 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.693 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.694 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.695 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
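
The neutron.* lines above, like every group in this dump, are emitted by oslo.config itself: at startup nova-compute calls log_opt_values(), which walks each registered option group and logs one DEBUG line per option (hence the common "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" suffix), masking any option registered with secret=True, which is why neutron.metadata_proxy_shared_secret prints as **** while its neighbours show real values. A minimal, self-contained sketch of that mechanism, using only oslo.config; the options below are a small illustrative subset, not nova's full registration:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    grp = cfg.OptGroup(name='neutron', title='Neutron client options')
    conf.register_group(grp)
    conf.register_opts(
        [
            cfg.StrOpt('auth_type', default='password'),
            cfg.StrOpt('ovs_bridge', default='br-int'),
            cfg.IntOpt('http_retries', default=3),
            # secret=True is what makes the dump print '****' instead of the value
            cfg.StrOpt('metadata_proxy_shared_secret', secret=True),
        ],
        group=grp,
    )

    conf(args=[])  # parse an empty command line; defaults apply
    # Emits one 'neutron.<opt> = <value>' DEBUG line per option,
    # exactly the shape of the journal lines above.
    conf.log_opt_values(LOG, logging.DEBUG)
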
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.696 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.697 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.698 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.698 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.698 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.698 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.698 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.699 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.700 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.701 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.702 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
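
The placement.* block above is a standard keystoneauth1 option set: auth_type = password plus auth_url, username, project and user domain names, region_name and valid_interfaces, with every unset option left at None so keystoneauth falls back to its defaults (password itself is masked because it is a secret option). A service consumes such a block through keystoneauth's loading helpers, roughly as sketched below; the config-file path is an assumption for illustration, while the group name and options mirror the dump:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    # Register the standard keystoneauth auth/session/adapter options
    # under [placement], matching the option names logged above.
    ks_loading.register_auth_conf_options(conf, 'placement')
    ks_loading.register_session_conf_options(conf, 'placement')
    ks_loading.register_adapter_conf_options(conf, 'placement')

    conf(args=[], default_config_files=['/etc/nova/nova.conf'])  # assumed path

    auth = ks_loading.load_auth_from_conf_options(conf, 'placement')
    sess = ks_loading.load_session_from_conf_options(conf, 'placement', auth=auth)
    adapter = ks_loading.load_adapter_from_conf_options(conf, 'placement',
                                                        session=sess)

    # The adapter resolves the placement endpoint from the service catalog
    # via service_type / region_name / valid_interfaces and attaches a token.
    resp = adapter.get('/resource_providers')  # needs a reachable cloud
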
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.703 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.704 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.705 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.705 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.705 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.705 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.705 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.706 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.706 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.706 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.706 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.706 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.707 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.707 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.707 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.707 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.707 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.708 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
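
scheduler.discover_hosts_in_cells_interval = -1 above uses the usual convention for nova periodic tasks: a positive value is a period in seconds, and a negative value disables the task outright. A toy sketch of that convention (the loop and task body are assumptions for illustration, not nova's periodic-task machinery):

    import time

    def run_periodic(interval: int, task) -> None:
        """Run task every interval seconds; a negative interval disables it."""
        if interval < 0:
            return  # -1, as logged above, means the task never runs
        while True:  # a positive interval runs the task forever
            task()
            time.sleep(interval)

    run_periodic(-1, lambda: print('discover_hosts'))  # returns immediately
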
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.708 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.708 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.708 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.708 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.709 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.710 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.711 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
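
The filter_scheduler block above splits host selection into two phases: enabled_filters lists the hard pass/fail checks, and the *_weight_multiplier options then rank the surviving hosts, where each weigher yields a normalized score per host and the scheduler sums multiplier times score. That is why io_ops_weight_multiplier defaults to -1.0 (fewer in-flight I/O operations should win) and build_failure_weight_multiplier to 1000000.0 (a recent build failure should dominate the ranking). An illustrative reduction of that arithmetic, with sign handling folded into made-up multipliers and toy host data, not nova's actual weigher classes:

    # Toy multiplier-based host weighing in the spirit of
    # filter_scheduler.*_weight_multiplier (all data assumed).
    hosts = {
        'host-a': {'free_ram': 0.8, 'io_ops': 0.1, 'build_failures': 0.0},
        'host-b': {'free_ram': 0.9, 'io_ops': 0.7, 'build_failures': 1.0},
    }

    multipliers = {
        'free_ram': 1.0,               # more free RAM is better
        'io_ops': -1.0,                # fewer in-flight ops is better
        'build_failures': -1000000.0,  # a recent failure sinks the host
    }

    def weigh(metrics: dict) -> float:
        # Total weight is the sum of multiplier * normalized metric.
        return sum(multipliers[name] * value for name, value in metrics.items())

    ranked = sorted(hosts, key=lambda h: weigh(hosts[h]), reverse=True)
    print(ranked)  # ['host-a', 'host-b']: host-b's build failure outweighs its RAM
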
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.712 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.713 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.714 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.715 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.716 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.717 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.717 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.717 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.717 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.717 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.718 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.718 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.718 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.718 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.719 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.719 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.719 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.719 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.719 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.720 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.720 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.720 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.720 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.721 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.721 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.721 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.721 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.722 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.722 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.723 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.723 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.724 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.724 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.724 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.724 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.725 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.725 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.725 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.725 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.725 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.726 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.726 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.726 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.726 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.727 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.727 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.727 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.727 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.728 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.728 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.728 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.729 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.729 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.729 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.729 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.730 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.730 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.730 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.730 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.731 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.731 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.731 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.731 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.731 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.732 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.732 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.732 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.732 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.733 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.733 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.733 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.733 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.733 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.734 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.734 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.734 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.734 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.735 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.735 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.735 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.735 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.736 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.736 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.736 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.736 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.737 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.737 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.737 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.737 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.738 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.738 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.738 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.738 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.739 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.739 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.739 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.739 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.740 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.740 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.740 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.740 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.740 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.741 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.741 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.741 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.742 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.742 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.742 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.742 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.742 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.743 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.743 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.743 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.743 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.744 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.744 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.744 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.744 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.745 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.745 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.745 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.745 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.745 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.746 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.746 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.746 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.746 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.747 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.747 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.747 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.747 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.748 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.748 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.748 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.748 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.748 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.749 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.749 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.749 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.749 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.750 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.750 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.750 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.750 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.751 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.751 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.751 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.751 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.752 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.752 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.752 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.752 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.753 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.753 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.753 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.753 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.753 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.754 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.754 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.754 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.754 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.755 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.755 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.755 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.755 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.755 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.756 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.756 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.756 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.757 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.757 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.757 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.757 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.757 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.758 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.758 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.758 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.758 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.759 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.759 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.759 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.759 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.760 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.760 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.760 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.760 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.761 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.761 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.761 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.761 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.761 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.762 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.762 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.762 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.762 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.763 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.763 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.763 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.763 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.764 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.764 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.764 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.764 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.765 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.765 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.765 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.765 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.765 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.766 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.766 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.766 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.766 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.766 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.767 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.768 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.768 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.768 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.768 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.768 256090 DEBUG oslo_service.service [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.769 256090 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 23 16:01:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.804 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.805 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.805 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 23 16:01:49 np0005532761 nova_compute[256079]: 2025-11-23 21:01:49.805 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 23 16:01:49 np0005532761 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 16:01:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:49.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:49 np0005532761 systemd[1]: Started libvirt QEMU daemon.
Nov 23 16:01:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:50 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538002c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.010 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff49e9dd6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.013 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff49e9dd6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.014 256090 INFO nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Connection event '1' reason 'None'
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.031 256090 WARNING nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.032 256090 DEBUG nova.virt.libvirt.volume.mount [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 23 16:01:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:50 np0005532761 python3.9[256966]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 16:01:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:50 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:01:50 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.919 256090 INFO nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <host>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <uuid>96c43856-d9f2-4184-a050-b9dc5065d3a6</uuid>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <cpu>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <arch>x86_64</arch>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <microcode version='16777317'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <signature family='23' model='49' stepping='0'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='x2apic'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='tsc-deadline'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='osxsave'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='hypervisor'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='tsc_adjust'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='spec-ctrl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='stibp'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='arch-capabilities'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='ssbd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='cmp_legacy'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='topoext'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='virt-ssbd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='lbrv'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='tsc-scale'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='vmcb-clean'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='pause-filter'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='pfthreshold'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='rdctl-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='mds-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature name='pschange-mc-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <pages unit='KiB' size='4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <pages unit='KiB' size='2048'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <pages unit='KiB' size='1048576'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </cpu>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <power_management>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <suspend_mem/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </power_management>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <iommu support='no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <migration_features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <live/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <uri_transports>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <uri_transport>tcp</uri_transport>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <uri_transport>rdma</uri_transport>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </uri_transports>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </migration_features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <topology>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <cells num='1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <cell id='0'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <memory unit='KiB'>7864312</memory>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <pages unit='KiB' size='4'>1966078</pages>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <distances>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <sibling id='0' value='10'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          </distances>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          <cpus num='8'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:          </cpus>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        </cell>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </cells>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </topology>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <cache>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </cache>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <secmodel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model>selinux</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <doi>0</doi>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </secmodel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <secmodel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model>dac</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <doi>0</doi>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </secmodel>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  </host>
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <guest>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <os_type>hvm</os_type>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <arch name='i686'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <wordsize>32</wordsize>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <domain type='qemu'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <domain type='kvm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </arch>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <pae/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <nonpae/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <apic default='on' toggle='no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <cpuselection/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <deviceboot/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <externalSnapshot/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  </guest>
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <guest>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <os_type>hvm</os_type>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <arch name='x86_64'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <wordsize>64</wordsize>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <domain type='qemu'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <domain type='kvm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </arch>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <apic default='on' toggle='no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <cpuselection/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <deviceboot/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <externalSnapshot/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </features>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  </guest>
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 
Nov 23 16:01:50 np0005532761 nova_compute[256079]: </capabilities>
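The <capabilities> document ending above is the XML that libvirt returns from virConnectGetCapabilities(); nova's host wrapper logs it at INFO when it first connects to the hypervisor. A minimal sketch of the same query via the libvirt-python bindings, assuming a local qemu:///system hypervisor:

    import libvirt

    # Read-only access is sufficient for capability queries.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        caps_xml = conn.getCapabilities()  # the same XML document logged above
        print(caps_xml)
    finally:
        conn.close()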
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.925 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
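Per the DEBUG line above, nova walks the machine types reported for each guest arch ({'q35', 'pc'} here) and asks libvirt for the corresponding domain capabilities; the <domainCapabilities> dump that follows is the result for arch=i686 on q35. A sketch of that query, assuming the same read-only connection and the emulator path taken from the capabilities XML above:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        # Positional args: emulatorbin, arch, machine, virttype, flags,
        # mirroring the logged query for i686/q35 KVM guests.
        domcaps_xml = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "i686", "q35", "kvm", 0
        )
        print(domcaps_xml)  # the <domainCapabilities> document shown below
    finally:
        conn.close()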
Nov 23 16:01:50 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.951 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 16:01:50 np0005532761 nova_compute[256079]: <domainCapabilities>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <domain>kvm</domain>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <arch>i686</arch>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <vcpu max='4096'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <iothreads supported='yes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <os supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <enum name='firmware'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <loader supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>rom</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>pflash</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <enum name='readonly'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>yes</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <enum name='secure'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </loader>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  </os>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:  <cpu>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <enum name='maximumMigratable'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='succor'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:    <mode name='custom' supported='yes'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Denverton'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx10'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx10-128'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx10-256'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx10-512'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SierraForest'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Snowridge'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <memoryBacking supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='sourceType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>anonymous</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>memfd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </memoryBacking>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <disk supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='diskDevice'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>disk</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cdrom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>floppy</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>lun</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>fdc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>sata</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </disk>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <graphics supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vnc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egl-headless</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </graphics>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <video supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='modelType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vga</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cirrus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>none</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>bochs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ramfb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </video>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hostdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='mode'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>subsystem</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='startupPolicy'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>mandatory</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>requisite</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>optional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='subsysType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pci</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='capsType'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='pciBackend'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hostdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <rng supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>random</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </rng>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <filesystem supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='driverType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>path</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>handle</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtiofs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </filesystem>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <tpm supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-tis</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-crb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emulator</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>external</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendVersion'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>2.0</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </tpm>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <redirdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </redirdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <channel supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </channel>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <crypto supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </crypto>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <interface supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>passt</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </interface>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <panic supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>isa</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>hyperv</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </panic>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <console supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>null</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dev</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pipe</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stdio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>udp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tcp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu-vdagent</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </console>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <gic supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <genid supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backup supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <async-teardown supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <ps2 supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sev supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sgx supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hyperv supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='features'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>relaxed</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vapic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>spinlocks</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vpindex</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>runtime</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>synic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stimer</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reset</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vendor_id</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>frequencies</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reenlightenment</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tlbflush</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ipi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>avic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emsr_bitmap</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>xmm_input</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hyperv>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <launchSecurity supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='sectype'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tdx</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </launchSecurity>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: </domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.957 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 16:01:51 np0005532761 nova_compute[256079]: <domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <domain>kvm</domain>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <arch>i686</arch>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <vcpu max='240'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <iothreads supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <os supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='firmware'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <loader supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>rom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pflash</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='readonly'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>yes</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='secure'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </loader>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </os>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='maximumMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <vendor>AMD</vendor>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='succor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='custom' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-128'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-256'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-512'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <memoryBacking supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='sourceType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>anonymous</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>memfd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </memoryBacking>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <disk supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='diskDevice'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>disk</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cdrom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>floppy</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>lun</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ide</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>fdc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>sata</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </disk>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <graphics supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vnc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egl-headless</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </graphics>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <video supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='modelType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vga</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cirrus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>none</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>bochs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ramfb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </video>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hostdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='mode'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>subsystem</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='startupPolicy'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>mandatory</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>requisite</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>optional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='subsysType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pci</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='capsType'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='pciBackend'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hostdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <rng supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>random</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </rng>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <filesystem supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='driverType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>path</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>handle</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtiofs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </filesystem>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <tpm supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-tis</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-crb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emulator</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>external</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendVersion'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>2.0</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </tpm>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <redirdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </redirdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <channel supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </channel>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <crypto supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </crypto>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <interface supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>passt</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </interface>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <panic supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>isa</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>hyperv</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </panic>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <console supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>null</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dev</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pipe</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stdio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>udp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tcp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu-vdagent</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </console>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <gic supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <genid supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backup supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <async-teardown supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <ps2 supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sev supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sgx supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hyperv supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='features'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>relaxed</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vapic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>spinlocks</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vpindex</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>runtime</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>synic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stimer</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reset</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vendor_id</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>frequencies</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reenlightenment</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tlbflush</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ipi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>avic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emsr_bitmap</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>xmm_input</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hyperv>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <launchSecurity supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='sectype'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tdx</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </launchSecurity>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: </domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.981 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:50.985 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 16:01:51 np0005532761 nova_compute[256079]: <domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <domain>kvm</domain>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <arch>x86_64</arch>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <vcpu max='4096'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <iothreads supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <os supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='firmware'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>efi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <loader supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>rom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pflash</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='readonly'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>yes</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='secure'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>yes</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </loader>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </os>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='maximumMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <vendor>AMD</vendor>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='succor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='custom' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-128'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-256'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-512'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <memoryBacking supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='sourceType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>anonymous</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>memfd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </memoryBacking>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <disk supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='diskDevice'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>disk</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cdrom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>floppy</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>lun</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>fdc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>sata</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </disk>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <graphics supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vnc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egl-headless</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </graphics>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <video supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='modelType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vga</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cirrus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>none</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>bochs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ramfb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </video>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hostdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='mode'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>subsystem</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='startupPolicy'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>mandatory</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>requisite</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>optional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='subsysType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pci</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='capsType'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='pciBackend'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hostdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <rng supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>random</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </rng>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <filesystem supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='driverType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>path</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>handle</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtiofs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </filesystem>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <tpm supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-tis</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-crb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emulator</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>external</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendVersion'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>2.0</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </tpm>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <redirdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </redirdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <channel supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </channel>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <crypto supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </crypto>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <interface supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>passt</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </interface>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <panic supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>isa</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>hyperv</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </panic>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <console supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>null</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dev</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pipe</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stdio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>udp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tcp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu-vdagent</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </console>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <gic supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <genid supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backup supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <async-teardown supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <ps2 supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sev supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sgx supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hyperv supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='features'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>relaxed</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vapic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>spinlocks</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vpindex</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>runtime</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>synic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stimer</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reset</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vendor_id</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>frequencies</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reenlightenment</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tlbflush</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ipi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>avic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emsr_bitmap</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>xmm_input</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hyperv>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <launchSecurity supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='sectype'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tdx</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </launchSecurity>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: </domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.045 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 16:01:51 np0005532761 nova_compute[256079]: <domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <domain>kvm</domain>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <arch>x86_64</arch>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <vcpu max='240'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <iothreads supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <os supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='firmware'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <loader supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>rom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pflash</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='readonly'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>yes</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='secure'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>no</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </loader>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </os>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='maximumMigratable'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>on</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>off</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <vendor>AMD</vendor>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='succor'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <mode name='custom' supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
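[editor's note] Every Cascadelake-Server variant above comes back usable='no' because the host CPU exposes none of the AVX-512 family plus a handful of other features (pcid, invpcid, pku, erms, ...); the <blockers model='...'> element spells out exactly which host features are missing for each named model. A minimal sketch for summarizing such a dump offline, assuming the logged fragment has been saved to a hypothetical cpu_models.xml with the journald prefixes stripped; since the fragment has no single root element it is wrapped in one before parsing (none of this is nova's own code):

# Summarize the <model>/<blockers> listing logged above.
# cpu_models.xml and the <models> wrapper are assumptions for illustration.
import xml.etree.ElementTree as ET

with open("cpu_models.xml") as f:
    root = ET.fromstring("<models>" + f.read() + "</models>")

# Models the host can run as-is.
usable = sorted(m.text.strip() for m in root.iter("model")
                if m.get("usable") == "yes")
# For each blocked model, the host features libvirt says are missing.
blocked = {
    b.get("model"): [feat.get("name") for feat in b.findall("feature")]
    for b in root.iter("blockers")
}

print("usable:", ", ".join(usable))
for name, feats in sorted(blocked.items()):
    print(f"{name}: missing {len(feats)} host features, e.g. {feats[0]}")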
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Denverton-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='auto-ibrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amd-psfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='stibp-always-on'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='EPYC-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-128'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-256'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx10-512'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='prefetchiti'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Haswell-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
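[editor's note] The three radosgw lines interleaved with the XML above trace one complete request through the beast frontend: request start, request completion (op status=0, http_status=200), and the access-log line itself. An anonymous HEAD / over HTTP/1.0 from 192.168.122.102, answered in about a millisecond, is the typical signature of a load-balancer health probe rather than real S3 traffic. A hypothetical way to fire the same kind of probe by hand (host and port are assumptions; the frontend address is not shown in this log):

# Hypothetical RGW health probe: anonymous HEAD / against the beast frontend.
# Host and port are guesses; http.client speaks HTTP/1.1 rather than the
# HTTP/1.0 seen in the access line, which RGW answers the same way.
import http.client

conn = http.client.HTTPConnection("np0005532761", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)  # a healthy RGW answers 200
conn.close()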
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512er'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512pf'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fma4'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tbm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xop'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='amx-tile'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-bf16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-fp16'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bitalg'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrc'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fzrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='la57'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='taa-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xfd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ifma'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cmpccxadd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fbsdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='fsrs'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ibrs-all'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mcdt-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pbrsb-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='psdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='serialize'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vaes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='hle'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='rtm'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512bw'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512cd'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512dq'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512f'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='avx512vl'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='invpcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pcid'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='pku'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='mpx'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='core-capability'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='split-lock-detect'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='cldemote'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='erms'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='gfni'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdir64b'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='movdiri'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='xsaves'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='athlon-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='core2duo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='coreduo-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='n270-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='ss'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <blockers model='phenom-v1'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnow'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <feature name='3dnowext'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </blockers>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </mode>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </cpu>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <memoryBacking supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <enum name='sourceType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>anonymous</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <value>memfd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </memoryBacking>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <disk supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='diskDevice'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>disk</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cdrom</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>floppy</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>lun</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ide</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>fdc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>sata</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </disk>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <graphics supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vnc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egl-headless</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </graphics>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <video supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='modelType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vga</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>cirrus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>none</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>bochs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ramfb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </video>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hostdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='mode'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>subsystem</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='startupPolicy'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>mandatory</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>requisite</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>optional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='subsysType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pci</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>scsi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='capsType'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='pciBackend'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hostdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <rng supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtio-non-transitional</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>random</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>egd</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </rng>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <filesystem supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='driverType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>path</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>handle</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>virtiofs</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </filesystem>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <tpm supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-tis</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tpm-crb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emulator</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>external</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendVersion'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>2.0</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </tpm>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <redirdev supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='bus'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>usb</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </redirdev>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <channel supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </channel>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <crypto supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendModel'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>builtin</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </crypto>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <interface supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='backendType'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>default</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>passt</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </interface>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <panic supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='model'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>isa</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>hyperv</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </panic>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <console supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='type'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>null</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vc</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pty</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dev</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>file</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>pipe</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stdio</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>udp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tcp</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>unix</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>qemu-vdagent</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>dbus</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </console>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </devices>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  <features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <gic supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <genid supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <backup supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <async-teardown supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <ps2 supported='yes'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sev supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <sgx supported='no'/>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <hyperv supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='features'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>relaxed</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vapic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>spinlocks</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vpindex</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>runtime</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>synic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>stimer</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reset</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>vendor_id</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>frequencies</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>reenlightenment</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tlbflush</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>ipi</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>avic</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>emsr_bitmap</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>xmm_input</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </defaults>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </hyperv>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    <launchSecurity supported='yes'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      <enum name='sectype'>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:        <value>tdx</value>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:      </enum>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:    </launchSecurity>
Nov 23 16:01:51 np0005532761 nova_compute[256079]:  </features>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: </domainCapabilities>
Nov 23 16:01:51 np0005532761 nova_compute[256079]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
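The domainCapabilities document that ends above is the XML nova's _get_domain_capabilities() (host.py:1037) retrieves from libvirt. A minimal sketch of fetching and querying the same document directly with the libvirt Python bindings; the connection URI and the arch/virttype lookup arguments here are assumptions for illustration, not values taken from this log:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")  # assumed URI; nova manages its own connection
    xml_doc = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(xml_doc)
    # Print the supported TPM backend models, matching the <tpm> enum logged above.
    for value in root.findall(".//tpm/enum[@name='backendModel']/value"):
        print(value.text)  # emulator, external
    conn.close()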
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.114 256090 DEBUG nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.114 256090 INFO nova.virt.libvirt.host [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Secure Boot support detected
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.115 256090 INFO nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.116 256090 INFO nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.123 256090 DEBUG nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.140 256090 INFO nova.virt.node [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Determined node identity 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from /var/lib/nova/compute_id
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.150 256090 WARNING nova.compute.manager [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Compute nodes ['5c6a407d-d270-4df1-a24d-91d09c3ff1cd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.168 256090 INFO nova.compute.manager [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.194 256090 WARNING nova.compute.manager [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.195 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.195 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.195 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.195 256090 DEBUG nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.196 256090 DEBUG oslo_concurrency.processutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:01:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:51 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:01:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496637839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.662 256090 DEBUG oslo_concurrency.processutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
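The two processutils lines above bracket a shell-out to ceph during the resource audit. A standalone sketch of the same probe, reusing the client id and conf path shown in the command; the JSON field names assume ceph's current `df --format=json` layout:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)
    # Cluster-wide totals, which nova's RBD image backend uses for disk accounting.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])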
Nov 23 16:01:51 np0005532761 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 16:01:51 np0005532761 systemd[1]: Started libvirt nodedev daemon.
Nov 23 16:01:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:51 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:01:51.857 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:01:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:01:51.858 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:01:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:01:51.858 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:01:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:51.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.998 256090 WARNING nova.virt.libvirt.driver [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.999 256090 DEBUG nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4953MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:01:51 np0005532761 nova_compute[256079]: 2025-11-23 21:01:51.999 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.000 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:01:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:52 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.011 256090 WARNING nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] No compute node record for compute-0.ctlplane.example.com:5c6a407d-d270-4df1-a24d-91d09c3ff1cd: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 5c6a407d-d270-4df1-a24d-91d09c3ff1cd could not be found.
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.032 256090 INFO nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.073 256090 DEBUG nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.073 256090 DEBUG nova.compute.resource_tracker [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:01:52 np0005532761 python3.9[257194]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 16:01:52 np0005532761 systemd[1]: Stopping nova_compute container...
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.211 256090 DEBUG oslo_concurrency.lockutils [None req-1bf002ca-8dc8-4347-9c5c-de9694489352 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.212 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.212 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 16:01:52 np0005532761 nova_compute[256079]: 2025-11-23 21:01:52.212 256090 DEBUG oslo_concurrency.lockutils [None req-7b2679e4-9886-440f-b8f7-2f3f2a02327b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
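The Acquiring/acquired/released triples throughout these records come from oslo.concurrency's lock decorator; the "inner" frames cited in lockutils.py are its wrapper function. A sketch of the same pattern outside nova, assuming only the oslo_concurrency library (the function name here is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_resources():
        # Body runs with the named in-process lock held; lock entry and exit
        # are what produce the DEBUG lines above when debug logging is on.
        pass

    update_resources()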
Nov 23 16:01:53 np0005532761 virtqemud[256805]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 23 16:01:53 np0005532761 systemd[1]: libpod-9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d.scope: Deactivated successfully.
Nov 23 16:01:53 np0005532761 virtqemud[256805]: hostname: compute-0
Nov 23 16:01:53 np0005532761 virtqemud[256805]: End of file while reading data: Input/output error
Nov 23 16:01:53 np0005532761 systemd[1]: libpod-9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d.scope: Consumed 4.117s CPU time.
Nov 23 16:01:53 np0005532761 podman[257200]: 2025-11-23 21:01:53.00911393 +0000 UTC m=+0.842633525 container died 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 23 16:01:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:01:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d-userdata-shm.mount: Deactivated successfully.
Nov 23 16:01:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722-merged.mount: Deactivated successfully.
Nov 23 16:01:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:01:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:53 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:01:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:54 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 16:01:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:55.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:55 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:55 np0005532761 podman[257200]: 2025-11-23 21:01:55.487483834 +0000 UTC m=+3.321003429 container cleanup 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:01:55 np0005532761 podman[257200]: nova_compute
Nov 23 16:01:55 np0005532761 podman[257235]: nova_compute
Nov 23 16:01:55 np0005532761 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 23 16:01:55 np0005532761 systemd[1]: Stopped nova_compute container.
Nov 23 16:01:55 np0005532761 systemd[1]: Starting nova_compute container...
Nov 23 16:01:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f604d195c7406ce6b9304467ebee2d1c2d0f12e4b8f03aedebcb91852722/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:55 np0005532761 podman[257248]: 2025-11-23 21:01:55.717250282 +0000 UTC m=+0.109785323 container init 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 23 16:01:55 np0005532761 podman[257248]: 2025-11-23 21:01:55.727076666 +0000 UTC m=+0.119611687 container start 9b05eade4ff6a41835c7a4253eeb81f32c705fc7cdd92f3de8eccb1ad046816d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:01:55 np0005532761 podman[257248]: nova_compute
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + sudo -E kolla_set_configs
Nov 23 16:01:55 np0005532761 systemd[1]: Started nova_compute container.
Nov 23 16:01:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:55 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Validating config file
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying service configuration files
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /etc/ceph
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Creating directory /etc/ceph
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Writing out command to execute
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:55 np0005532761 nova_compute[257263]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
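The kolla_set_configs output above is driven by entries in /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy: delete the destination, copy the source, reset permissions. A simplified sketch of that loop for plain file entries (directory entries like /etc/ceph are handled separately by the real tool; field names follow kolla's config.json convention):

    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for entry in config.get("config_files", []):
        source, dest = entry["source"], entry["dest"]
        if os.path.lexists(dest):
            os.remove(dest)                    # "Deleting <dest>"
        shutil.copy(source, dest)              # "Copying <source> to <dest>"
        os.chmod(dest, int(entry["perm"], 8))  # "Setting permission for <dest>"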
Nov 23 16:01:55 np0005532761 nova_compute[257263]: ++ cat /run_command
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + CMD=nova-compute
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + ARGS=
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + sudo kolla_copy_cacerts
Nov 23 16:01:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + [[ ! -n '' ]]
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + . kolla_extend_start
Nov 23 16:01:55 np0005532761 nova_compute[257263]: Running command: 'nova-compute'
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + umask 0022
Nov 23 16:01:55 np0005532761 nova_compute[257263]: + exec nova-compute
Nov 23 16:01:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:56 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2514001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:56 np0005532761 python3.9[257426]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 16:01:56 np0005532761 systemd[1]: Started libpod-conmon-ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef.scope.
Nov 23 16:01:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:01:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26144f4eccbe65a80d6101ecfd5465f99ed5d40ae12ea6f5f6029a4b988539c8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26144f4eccbe65a80d6101ecfd5465f99ed5d40ae12ea6f5f6029a4b988539c8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:56 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26144f4eccbe65a80d6101ecfd5465f99ed5d40ae12ea6f5f6029a4b988539c8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:56 np0005532761 podman[257453]: 2025-11-23 21:01:56.888751981 +0000 UTC m=+0.162062865 container init ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:01:56 np0005532761 podman[257453]: 2025-11-23 21:01:56.895458251 +0000 UTC m=+0.168769115 container start ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Applying nova statedir ownership
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 23 16:01:56 np0005532761 nova_compute_init[257473]: INFO:nova_statedir:Nova statedir ownership complete
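The nova_compute_init pass above walks /var/lib/nova, compares each path's owner against nova's container uid/gid (42436), and chowns only on mismatch, honoring the NOVA_STATEDIR_OWNERSHIP_SKIP path from the container environment. A condensed sketch of that walk, with the selinux relabeling step omitted:

    import os

    TARGET_UID = TARGET_GID = 42436          # target ownership from the log above
    SKIP = {"/var/lib/nova/compute_id"}      # NOVA_STATEDIR_OWNERSHIP_SKIP

    def ensure_ownership(root="/var/lib/nova"):
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of <path> from x:y to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)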
Nov 23 16:01:56 np0005532761 systemd[1]: libpod-ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef.scope: Deactivated successfully.
Nov 23 16:01:57 np0005532761 python3.9[257426]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 23 16:01:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:01:57 np0005532761 podman[257475]: 2025-11-23 21:01:57.016143935 +0000 UTC m=+0.056780913 container died ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 16:01:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef-userdata-shm.mount: Deactivated successfully.
Nov 23 16:01:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-26144f4eccbe65a80d6101ecfd5465f99ed5d40ae12ea6f5f6029a4b988539c8-merged.mount: Deactivated successfully.
Nov 23 16:01:57 np0005532761 podman[257475]: 2025-11-23 21:01:57.058518021 +0000 UTC m=+0.099154989 container cleanup ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:01:57 np0005532761 systemd[1]: libpod-conmon-ab9a4dc52e0d773251047a8d519e166b76aa9226ad29ce21b69e36867fba09ef.scope: Deactivated successfully.
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:57.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:57.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:01:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:01:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:57.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:57] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:01:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:01:57] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:01:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:57 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2518003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:57 np0005532761 nova_compute[257263]: 2025-11-23 21:01:57.857 257267 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:57 np0005532761 systemd[1]: session-54.scope: Deactivated successfully.
Nov 23 16:01:57 np0005532761 systemd[1]: session-54.scope: Consumed 2min 14.666s CPU time.
Nov 23 16:01:57 np0005532761 nova_compute[257263]: 2025-11-23 21:01:57.858 257267 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:57 np0005532761 nova_compute[257263]: 2025-11-23 21:01:57.860 257267 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:57 np0005532761 systemd-logind[820]: Session 54 logged out. Waiting for processes to exit.
Nov 23 16:01:57 np0005532761 nova_compute[257263]: 2025-11-23 21:01:57.860 257267 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
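Note: os_vif discovers its VIF plugins through Python entry points and initializes each one, which is what the three "Loaded VIF plugin class" DEBUG lines and this INFO summary record; inside nova the whole step is just os_vif.initialize(). A minimal sketch of the same enumeration, assuming the upstream stevedore namespace "os_vif":

    # Enumerate VIF plugins the way os_vif does: stevedore entry points.
    # The "os_vif" namespace matches upstream packaging, treated here as
    # an assumption.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print("Loaded VIF plugins:", ", ".join(sorted(mgr.names())))
    # expected on this node: linux_bridge, noop, ovs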
Nov 23 16:01:57 np0005532761 systemd-logind[820]: Removed session 54.
Nov 23 16:01:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:01:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:57 np0005532761 nova_compute[257263]: 2025-11-23 21:01:57.998 257267 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:01:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:58 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.023 257267 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.024 257267 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
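Note: the grep sequence above is a capability probe, not a failure. grep -F exits 0 if /sbin/iscsiadm embeds the literal string node.session.scan (a marker for manual-scan support) and 1 if it does not, so the "returned: 1 ... Not Retrying" pair simply records a negative result. A sketch of the same check:

    # Capability probe: does the iscsiadm binary embed "node.session.scan"?
    # grep exit code 1 means "string absent", which is a result, not an error.
    import subprocess

    def iscsiadm_supports_manual_scan(path: str = "/sbin/iscsiadm") -> bool:
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0

    print(iscsiadm_supports_manual_scan())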
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.460 257267 INFO nova.virt.driver [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 16:01:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:01:58 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.588 257267 INFO nova.compute.provider_config [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.598 257267 DEBUG oslo_concurrency.lockutils [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.599 257267 DEBUG oslo_concurrency.lockutils [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.599 257267 DEBUG oslo_concurrency.lockutils [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
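Note: the Acquiring/Acquired/Releasing trio above is oslo.concurrency's named-lock helper at work; nova takes "singleton_lock" briefly while wiring up the service. A minimal sketch of the same primitive (the lock name is taken from the log, the critical section is illustrative):

    # Named in-process lock via oslo.concurrency; with lockutils DEBUG logging
    # on, this emits the same Acquiring/Acquired/Releasing lines seen above.
    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        pass  # critical section: one holder of this name at a time

With external=True and a configured lock_path (see oslo_concurrency.lock_path later in this dump), the same call serializes across processes via a file lock.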
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.600 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
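Note: the dump header above lists the configuration sources in load order: command line args (empty here), the config_file list, and the /etc/nova/nova.conf.d config dir. With oslo.config, later sources override earlier ones, so values in nova-compute.conf win over nova.conf. A sketch of the same load, with paths mirroring the log and an illustrative option:

    # oslo.config merges CLI args, then config files in order (later files win),
    # then config-dir snippets. Paths mirror the log; the opt is illustrative,
    # and this sketch assumes the files exist as they do on the node.
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opt(cfg.BoolOpt("debug", default=False))
    CONF(
        args=[],
        default_config_files=["/etc/nova/nova.conf", "/etc/nova/nova-compute.conf"],
        default_config_dirs=["/etc/nova/nova.conf.d"],
    )
    print(CONF.debug)  # True on this node, per the value dump that follows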
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.601 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.602 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.603 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.603 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.603 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.603 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.603 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.604 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.604 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.604 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.604 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.604 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.605 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.605 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.605 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.605 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.606 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.606 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.607 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.608 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.608 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.608 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.608 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.609 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.609 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.609 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.609 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.609 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.610 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.610 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.610 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.610 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.610 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.611 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.611 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.611 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.611 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.611 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.612 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.612 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.612 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.612 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.612 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.613 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.614 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.614 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.614 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.614 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.614 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.615 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.615 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.615 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.615 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.615 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.616 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.616 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.616 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.616 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.616 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.617 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.618 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.618 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.618 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.618 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.618 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.619 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.620 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.620 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.620 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.620 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.620 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.621 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.621 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.621 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.621 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.621 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.622 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.623 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.623 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.623 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.623 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.623 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.624 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.625 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.625 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.625 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.625 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
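Note: transport_url is rendered as **** above because oslo.config masks any option registered with secret=True when log_opt_values() walks the configuration, keeping the message-queue credentials embedded in the URL out of the journal. A minimal sketch of the masking (the override value is a placeholder):

    # Options registered with secret=True are printed as **** by log_opt_values.
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opt(cfg.StrOpt("transport_url", secret=True))
    CONF(args=[])
    CONF.set_override("transport_url", "rabbit://user:secret@mq.example:5672/")

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # output includes: transport_url = ****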
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.625 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.626 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.626 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.626 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.626 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.626 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.627 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.627 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.627 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.627 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.627 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.628 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.628 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.628 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.628 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.629 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.629 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.629 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.629 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.629 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.630 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.630 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.630 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.630 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.630 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.631 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.631 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.631 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.631 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.631 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.632 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.632 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.632 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.632 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.632 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.633 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.634 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.634 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.634 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.634 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
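
The wall of `<group>.<option> = <value>` DEBUG lines above and below is oslo.config's standard startup dump: at cfg.py:2609 the service walks every registered option and logs its resolved value. A minimal sketch of that mechanism, assuming plain oslo.config usage (the two [api] options and their defaults are taken from the lines above; the wiring is illustrative, not Nova's actual startup code):

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.ConfigOpts()

    # Two of the [api] options seen above, registered with the same defaults.
    CONF.register_opts(
        [cfg.IntOpt('max_limit', default=1000),
         cfg.ListOpt('vendordata_providers', default=['StaticJSON'])],
        group='api')

    CONF([])  # parse; a real service passes --config-file /etc/nova/nova.conf here
    logging.basicConfig(level=logging.DEBUG)

    # Emits one "api.max_limit = 1000"-style DEBUG line per registered option,
    # which is exactly the dump format captured in this journal.
    CONF.log_opt_values(LOG, logging.DEBUG)
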
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.634 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.635 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.635 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.635 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.635 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.636 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.636 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.636 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.636 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.636 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.637 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.637 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.637 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.637 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.637 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.638 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.639 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.639 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.639 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.639 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.639 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.640 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.640 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.640 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.640 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.640 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
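
The [cache] group that just ended (backend = oslo_cache.dict, enabled = True, expiration_time = 600, memcache_servers = ['localhost:11211']) is consumed through oslo.cache. A minimal sketch of how those options drive a cache region, assuming oslo.cache's documented helpers (the overrides mirror the logged values; this is illustrative, not Nova's code):

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    cache.configure(CONF)   # registers the [cache] options dumped above

    CONF([])
    # Mirror the values this deployment logged.
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)  # dict backend, 600 s default TTL

    region.set('greeting', 'hello')
    assert region.get('greeting') == 'hello'

With the in-process dict backend the memcache_* options above are inert; they only take effect for the memcached-family backends.
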
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.641 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.642 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.642 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.642 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.642 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.642 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.643 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.643 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.643 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.643 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.643 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.644 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.644 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.644 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.644 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.644 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.645 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.646 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.646 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.646 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.646 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.646 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.647 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.647 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.647 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.647 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.647 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.648 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.648 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.648 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.648 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.648 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.649 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.649 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.649 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.649 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.649 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.650 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.651 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.651 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.651 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.651 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.651 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.652 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.652 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.652 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.652 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.652 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.653 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.654 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.654 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.654 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
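
database.connection and database.slave_connection print as **** because oslo.config redacts any option registered with secret=True; the masking happens inside log_opt_values itself, so credentials never reach the journal. A sketch of that behavior, using an obviously fake placeholder URL (the real value is not recoverable from this log):

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('connection', secret=True)],  # secret=True => logged as ****
        group='database')

    CONF([])
    CONF.set_override('connection',
                      'mysql+pymysql://nova:not-the-real-password@db/nova',
                      group='database')

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(LOG, logging.DEBUG)  # prints: database.connection = ****
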
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.654 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.654 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.655 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.655 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.655 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.655 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.655 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.656 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.657 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.657 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.657 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.657 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.657 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.658 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.658 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.658 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.658 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.658 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.659 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.659 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.659 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.659 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.659 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.660 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.660 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.660 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.660 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.660 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.661 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.662 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.662 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.662 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.662 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.662 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.663 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.663 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.663 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.663 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.663 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.664 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.664 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.664 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.664 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.664 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
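
The [glance] options that just ended (service_type = image, valid_interfaces = ['internal'], region_name = regionOne, plus the connect/status-code retry knobs) are the standard keystoneauth1 adapter options. A sketch of the adapter they would configure, assuming keystoneauth1's public API (Nova's real wiring loads the session and auth from this same config; the unauthenticated session here is for illustration only):

    from keystoneauth1 import adapter
    from keystoneauth1 import session as ks_session

    sess = ks_session.Session()  # real code builds this from the auth options
    glance = adapter.Adapter(
        session=sess,
        service_type='image',     # glance.service_type
        interface='internal',     # glance.valid_interfaces
        region_name='regionOne',  # glance.region_name
        connect_retries=3)        # illustrative; the log shows None (library default)

    # glance.get('/v2/images') would now resolve the internal image endpoint
    # from the service catalog and retry connection failures three times.
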
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.665 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.666 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.666 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.666 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.666 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.666 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.667 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.667 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.667 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.667 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.667 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.668 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.668 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.668 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.668 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.669 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.670 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
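
Taken together, the [image_cache] values just logged define the eviction cadence: the manager wakes every 2400 s and removes base images unused for 86400 s (originals) or 3600 s (resized copies) from the _base subdirectory. A sketch of the implied eviction rule (the helper and its names are hypothetical, for illustration only):

    import time

    MANAGER_INTERVAL = 2400       # image_cache.manager_interval (seconds)
    UNUSED_ORIGINAL_AGE = 86400   # image_cache.remove_unused_original_minimum_age_seconds
    UNUSED_RESIZED_AGE = 3600     # image_cache.remove_unused_resized_minimum_age_seconds

    def should_evict(last_used_ts, resized, now=None):
        """Hypothetical predicate mirroring the options above."""
        now = time.time() if now is None else now
        min_age = UNUSED_RESIZED_AGE if resized else UNUSED_ORIGINAL_AGE
        return (now - last_used_ts) >= min_age

    # Example: a resized image untouched for two hours is eligible for removal.
    assert should_evict(last_used_ts=0, resized=True, now=7200)
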
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.670 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.670 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.670 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.670 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.671 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.672 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.672 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.672 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.672 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.672 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.673 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.674 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.674 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.674 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.674 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.674 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.675 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.675 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.675 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.676 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.676 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.676 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.676 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.676 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.677 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.677 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.677 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.677 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.677 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.678 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.679 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.679 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.679 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.679 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.679 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.680 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.680 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.680 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.680 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.680 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.681 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.682 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.682 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.682 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.682 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.682 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.683 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.683 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.683 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.683 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
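The key-manager stack above selects Barbican (key_manager.backend = barbican); the [vault] group is logged because its options are registered, but with that backend choice the Vault values sit unused at their defaults (vault_url = http://127.0.0.1:8200 looks like the library default rather than a live endpoint, which is an assumption here). A nova.conf sketch of the effective key-manager settings, mirroring the logged values:

    [key_manager]
    backend = barbican

    [barbican]
    auth_endpoint = http://localhost/identity/v3
    barbican_endpoint_type = internal
    number_of_retries = 60
    retry_delay = 1
    verify_ssl = True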
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.683 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.684 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.684 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.684 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.684 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.684 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.685 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.685 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.685 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.685 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.685 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.686 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.686 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.686 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.686 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.686 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.687 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.687 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.687 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.687 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.687 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.688 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.688 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.688 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.688 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.688 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.689 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.690 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.690 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.690 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.690 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.690 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.691 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.691 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.691 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.691 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.691 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.692 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.692 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.692 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.692 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.692 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.693 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.693 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.693 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.693 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.693 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.694 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.695 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.695 257267 WARNING oslo_config.cfg [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 16:01:58 np0005532761 nova_compute[257263]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 16:01:58 np0005532761 nova_compute[257263]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 16:01:58 np0005532761 nova_compute[257263]: and ``live_migration_inbound_addr`` respectively.
Nov 23 16:01:58 np0005532761 nova_compute[257263]: ).  Its value may be silently ignored in the future.
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.695 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.695 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.696 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.696 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.696 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.696 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.696 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.697 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.697 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.697 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.697 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.697 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.698 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.699 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.699 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.699 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.699 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.700 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.700 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.700 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.700 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.700 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.701 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.701 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.701 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.701 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.701 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.702 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.703 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.703 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.703 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.703 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.703 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.704 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.704 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.704 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.704 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.705 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.705 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.705 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.705 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.705 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
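Two points stand out in the [libvirt] dump above. First, instance disks are Ceph RBD backed (images_type = rbd, pool vms, user openstack, keyed by rbd_secret_uuid). Second, the WARNING notes that live_migration_uri is deprecated in favor of live_migration_scheme and live_migration_inbound_addr, both of which are still None here. A nova.conf sketch of the RBD settings as logged, plus one possible replacement for the deprecated URI; the scheme and address values in the replacement are illustrative assumptions, not taken from this log:

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 03808be8-ae4a-5548-82e6-4a294f1bc627

    # deprecated, currently set: live_migration_uri = qemu+tls://%s/system
    # assumed equivalent using the non-deprecated options:
    live_migration_scheme = tls
    live_migration_inbound_addr = <target-host-migration-address>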
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.706 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.707 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.707 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.707 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.707 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.707 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.708 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.709 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.709 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.709 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.709 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.709 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.710 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.710 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.710 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.710 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.711 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.711 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.711 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.711 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.712 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.712 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.712 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.712 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.712 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.713 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.714 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.714 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.714 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.714 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.714 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.715 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.716 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.716 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.716 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.716 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.716 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.717 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.717 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.717 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.717 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.717 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.718 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.718 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.718 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.718 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.718 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.719 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.720 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.720 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.720 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.720 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.720 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.721 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.721 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.721 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.721 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.721 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.722 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.722 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.722 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.722 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.722 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.723 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.723 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.723 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.723 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.723 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.724 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.725 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.725 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.725 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.725 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.725 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.726 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.726 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.726 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.726 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.726 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.727 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.727 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.727 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.727 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.727 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.728 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.729 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.729 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.729 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.729 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.729 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.730 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.730 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.730 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.730 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.730 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.731 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.731 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.731 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.731 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.731 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.732 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.732 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.732 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.732 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.732 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.733 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.733 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.733 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.733 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.733 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.734 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.734 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.734 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.734 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.734 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.735 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.736 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.736 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.736 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.736 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.736 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.737 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.737 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.737 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.737 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.737 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.738 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.738 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.738 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.738 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.738 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.739 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.740 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.740 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.740 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.740 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.740 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.741 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.742 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.742 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.742 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.742 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.742 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.743 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.743 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.743 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.743 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.743 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.744 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.744 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.744 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.744 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.744 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.745 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.745 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.745 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.745 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.746 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.747 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.747 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.747 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.747 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.747 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.748 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.748 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.748 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.748 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.748 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.749 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.749 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.749 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.749 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.749 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.750 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.750 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.750 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.750 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.750 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.751 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.751 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.751 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.751 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.751 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.752 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.752 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.752 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.752 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.752 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.753 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.754 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.754 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.754 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.754 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.754 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.755 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.755 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.755 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.755 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.755 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.756 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.757 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.757 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.757 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.757 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.757 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.758 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.758 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.758 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.758 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.758 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.759 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.760 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.760 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.760 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.760 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.760 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.761 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.761 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.761 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.761 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.761 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.762 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.762 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.762 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.762 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.762 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.763 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.764 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.764 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.764 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.764 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.764 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.765 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.766 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.766 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.766 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.766 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.766 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.767 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.768 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.768 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.768 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.768 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.768 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.769 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.769 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.769 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.769 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.769 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.770 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.771 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.771 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.771 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.771 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.771 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.772 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.772 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.772 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.772 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.772 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.773 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.774 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.774 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.774 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.774 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.774 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.775 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.775 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.775 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.775 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.775 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.776 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.776 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.776 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.776 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.776 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.777 257267 DEBUG oslo_service.service [None req-da9f1c70-724c-45ed-8a73-cdc33a84cc9b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
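[editor's note] The asterisk banner above closes the option dump that nova-compute emits at startup: oslo.config's ConfigOpts.log_opt_values() walks every registered option group and writes one "group.option = value" line at DEBUG, masking options registered with secret=True as "****" (see oslo_limit.password and oslo_messaging_notifications.transport_url above). A minimal sketch of the same mechanism, assuming only oslo.config and stdlib logging; the group and option names are illustrative, not nova's full schema:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_group(cfg.OptGroup('oslo_policy'))
    CONF.register_opts(
        [cfg.BoolOpt('enforce_scope', default=True),
         cfg.StrOpt('policy_file', default='policy.yaml'),
         cfg.StrOpt('password', secret=True)],  # secret opts are logged as ****
        group='oslo_policy')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])  # parse command line / config files (empty here)
    # Produces the per-option "group.option = value" lines seen above,
    # followed by the closing banner of asterisks.
    CONF.log_opt_values(LOG, logging.DEBUG)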
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.778 257267 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.789 257267 INFO nova.virt.node [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Determined node identity 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from /var/lib/nova/compute_id#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.790 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.790 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.791 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.791 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.801 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fda0a8027c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.802 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fda0a8027c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.803 257267 INFO nova.virt.libvirt.driver [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Connection event '1' reason 'None'#033[00m
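[editor's note] What follows is the libvirt host-capabilities XML that the driver fetches once the qemu:///system connection above is up (nova logs "Connection event '1'" when the connection is established). The same document can be retrieved directly with the libvirt Python bindings; a minimal read-only sketch, assuming libvirt-python is installed and libvirtd is reachable locally:

    import libvirt

    # Read-only access is sufficient for capabilities queries.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        caps_xml = conn.getCapabilities()  # returns the <capabilities> XML string
        print(caps_xml)
    finally:
        conn.close()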
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.810 257267 INFO nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <host>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <uuid>96c43856-d9f2-4184-a050-b9dc5065d3a6</uuid>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <arch>x86_64</arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model>EPYC-Rome-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <vendor>AMD</vendor>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <microcode version='16777317'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <signature family='23' model='49' stepping='0'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='x2apic'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='tsc-deadline'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='osxsave'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='hypervisor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='tsc_adjust'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='spec-ctrl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='stibp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='arch-capabilities'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='cmp_legacy'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='topoext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='virt-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='lbrv'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='tsc-scale'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='vmcb-clean'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='pause-filter'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='pfthreshold'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='svme-addr-chk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='rdctl-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='mds-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature name='pschange-mc-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <pages unit='KiB' size='4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <pages unit='KiB' size='2048'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <pages unit='KiB' size='1048576'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <power_management>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <suspend_mem/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </power_management>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <iommu support='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <migration_features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <live/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <uri_transports>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <uri_transport>tcp</uri_transport>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <uri_transport>rdma</uri_transport>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </uri_transports>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </migration_features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <topology>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <cells num='1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <cell id='0'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <memory unit='KiB'>7864312</memory>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <pages unit='KiB' size='4'>1966078</pages>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <distances>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <sibling id='0' value='10'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          </distances>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          <cpus num='8'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:          </cpus>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        </cell>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </cells>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </topology>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <cache>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </cache>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <secmodel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model>selinux</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <doi>0</doi>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </secmodel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <secmodel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model>dac</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <doi>0</doi>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </secmodel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </host>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <guest>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <os_type>hvm</os_type>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <arch name='i686'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <wordsize>32</wordsize>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <domain type='qemu'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <domain type='kvm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <pae/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <nonpae/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <apic default='on' toggle='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <cpuselection/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <deviceboot/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <externalSnapshot/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </guest>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <guest>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <os_type>hvm</os_type>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <arch name='x86_64'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <wordsize>64</wordsize>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <domain type='qemu'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <domain type='kvm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <apic default='on' toggle='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <cpuselection/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <deviceboot/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <externalSnapshot/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </guest>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 
Nov 23 16:01:58 np0005532761 nova_compute[257263]: </capabilities>
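[Editor's note] The XML ending at </capabilities> above is the libvirt host capabilities document that nova-compute fetches once at service start-up and logs at DEBUG level. As a minimal sketch (not nova's actual code path), the same document can be retrieved and summarized with the libvirt Python bindings; this assumes the libvirt-python package and the local qemu:///system URI:

    # Sketch: fetch the host capabilities XML and list the guest arches
    # and machine types it advertises, mirroring the <guest> blocks above.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')          # local hypervisor connection
    caps = ET.fromstring(conn.getCapabilities())   # same XML as the dump above

    for guest in caps.findall('guest'):
        arch = guest.find('arch')
        machines = [m.text for m in arch.findall('machine')]
        print(arch.get('name'), guest.findtext('os_type'), machines)

    conn.close()

Against the dump above this would print two entries, one for i686 and one for x86_64, each listing the pc-i440fx-rhel7.6.0 and pc-q35-rhel* machine types.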
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.815 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.819 257267 DEBUG nova.virt.libvirt.volume.mount [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
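[Editor's note] The <domainCapabilities> dump that follows is the per-arch, per-machine-type view that the _get_machine_types loop in the DEBUG line above requests. A hedged sketch of the equivalent direct query, taking the emulator path, arch, and machine alias from the dump itself (the 'kvm' virt type is an assumption matching the <domain>kvm</domain> line below):

    # Sketch: request the same domainCapabilities document nova logs below.
    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, as in <path> below
        'i686',                   # guest architecture
        'pc',                     # machine alias (canonical: pc-i440fx-rhel7.6.0)
        'kvm',                    # virt type; assumption based on <domain>kvm</domain>
        0)                        # flags (unused)
    print(xml)
    conn.close()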
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.820 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 16:01:58 np0005532761 nova_compute[257263]: <domainCapabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <domain>kvm</domain>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <arch>i686</arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <vcpu max='240'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <iothreads supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <os supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <enum name='firmware'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <loader supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>rom</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pflash</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='readonly'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>yes</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='secure'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </loader>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </os>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='maximumMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <vendor>AMD</vendor>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='succor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='custom' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='auto-ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='auto-ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-128'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-256'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-512'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='KnightsMill'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512er'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512pf'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512er'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512pf'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tbm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tbm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SierraForest'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cmpccxadd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cmpccxadd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='athlon'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='athlon-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='core2duo'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='core2duo-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='coreduo'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='coreduo-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='n270'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='n270-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='phenom'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='phenom-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <memoryBacking supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <enum name='sourceType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>file</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>anonymous</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>memfd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </memoryBacking>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <devices>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <disk supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='diskDevice'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>disk</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>cdrom</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>floppy</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>lun</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='bus'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>ide</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>fdc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>scsi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>sata</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-non-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </disk>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <graphics supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vnc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>egl-headless</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dbus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </graphics>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <video supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='modelType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vga</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>cirrus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>none</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>bochs</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>ramfb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </video>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <hostdev supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='mode'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>subsystem</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='startupPolicy'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>default</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>mandatory</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>requisite</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>optional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='subsysType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pci</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>scsi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='capsType'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='pciBackend'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </hostdev>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <rng supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-non-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>random</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>egd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>builtin</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </rng>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <filesystem supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='driverType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>path</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>handle</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtiofs</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </filesystem>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <tpm supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tpm-tis</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tpm-crb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>emulator</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>external</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendVersion'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>2.0</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </tpm>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <redirdev supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='bus'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </redirdev>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <channel supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pty</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>unix</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </channel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <crypto supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>qemu</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>builtin</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </crypto>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <interface supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>default</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>passt</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </interface>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <panic supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>isa</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>hyperv</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </panic>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <console supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>null</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pty</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dev</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>file</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pipe</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>stdio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>udp</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tcp</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>unix</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>qemu-vdagent</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dbus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </console>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </devices>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <gic supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <genid supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <backup supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <async-teardown supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <ps2 supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <sev supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <sgx supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <hyperv supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='features'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>relaxed</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vapic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>spinlocks</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vpindex</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>runtime</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>synic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>stimer</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>reset</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vendor_id</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>frequencies</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>reenlightenment</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tlbflush</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>ipi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>avic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>emsr_bitmap</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>xmm_input</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <defaults>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </defaults>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </hyperv>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <launchSecurity supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='sectype'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tdx</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </launchSecurity>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: </domainCapabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
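For reference, the dump above is the XML returned by libvirt's getDomainCapabilities API, which nova's _get_domain_capabilities helper logs once per (arch, machine_type) pair; that is why a second dump for arch=i686 follows immediately below. A minimal libvirt-python sketch (not nova's own code; the qemu:///system URI and the parsing are illustrative assumptions, while the emulator path, machine type, and virt type are taken from the dump itself) that fetches the same XML and prints the named CPU models this host can actually run:

    import xml.etree.ElementTree as ET
    import libvirt

    # Assumption: local read-only connection to the system libvirt daemon.
    conn = libvirt.openReadOnly("qemu:///system")
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # <path> in the dump above
        "x86_64",                 # arch; the next dump below queries i686
        "pc-q35-rhel9.8.0",       # <machine> in the dump above
        "kvm",                    # <domain> in the dump above
        0,
    )
    root = ET.fromstring(caps_xml)

    # Under cpu/mode[@name='custom'], each <model> carries usable='yes'/'no';
    # an unusable model is followed by a <blockers> element naming the host
    # features it lacks (erms, pcid, avx512*, ... in this log).
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "yes":
            print(model.text)
    conn.close()

On this host the usable set is limited to Westmere and the deprecated generic models (qemu64, kvm64, pentium, 486, Conroe, ...), while every newer Intel model is blocked on features such as erms, pcid, and the avx512 group, consistent with the EPYC-Rome host-model reported in the i686 dump that follows.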
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.827 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 16:01:58 np0005532761 nova_compute[257263]: <domainCapabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <domain>kvm</domain>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <arch>i686</arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <vcpu max='4096'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <iothreads supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <os supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <enum name='firmware'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <loader supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>rom</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pflash</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='readonly'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>yes</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='secure'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </loader>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </os>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='maximumMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <vendor>AMD</vendor>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='succor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='custom' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Denverton-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='auto-ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='auto-ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amd-psfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='stibp-always-on'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='EPYC-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-128'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-256'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx10-512'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='prefetchiti'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Haswell-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='KnightsMill'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512er'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512pf'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512er'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512pf'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tbm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fma4'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tbm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xop'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='amx-tile'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-fp16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bitalg'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrc'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fzrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='la57'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xfd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SierraForest'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cmpccxadd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ifma'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cmpccxadd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fbsdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='fsrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mcdt-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pbrsb-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='psdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='serialize'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vaes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='mpx'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='core-capability'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='split-lock-detect'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='cldemote'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='gfni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdir64b'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='movdiri'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='athlon'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='athlon-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='core2duo'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='core2duo-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='coreduo'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='coreduo-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='n270'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='n270-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ss'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='phenom'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='phenom-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnow'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='3dnowext'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <memoryBacking supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <enum name='sourceType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>file</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>anonymous</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>memfd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </memoryBacking>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <devices>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <disk supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='diskDevice'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>disk</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>cdrom</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>floppy</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>lun</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='bus'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>fdc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>scsi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>sata</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-non-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </disk>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <graphics supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vnc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>egl-headless</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dbus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </graphics>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <video supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='modelType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vga</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>cirrus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>none</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>bochs</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>ramfb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </video>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <hostdev supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='mode'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>subsystem</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='startupPolicy'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>default</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>mandatory</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>requisite</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>optional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='subsysType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pci</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>scsi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='capsType'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='pciBackend'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </hostdev>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <rng supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtio-non-transitional</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>random</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>egd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>builtin</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </rng>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <filesystem supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='driverType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>path</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>handle</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>virtiofs</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </filesystem>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <tpm supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tpm-tis</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tpm-crb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>emulator</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>external</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendVersion'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>2.0</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </tpm>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <redirdev supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='bus'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>usb</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </redirdev>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <channel supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pty</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>unix</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </channel>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <crypto supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>qemu</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendModel'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>builtin</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </crypto>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <interface supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='backendType'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>default</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>passt</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </interface>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <panic supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='model'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>isa</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>hyperv</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </panic>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <console supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>null</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vc</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pty</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dev</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>file</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pipe</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>stdio</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>udp</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tcp</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>unix</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>qemu-vdagent</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>dbus</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </console>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </devices>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <gic supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <genid supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <backup supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <async-teardown supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <ps2 supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <sev supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <sgx supported='no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <hyperv supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='features'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>relaxed</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vapic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>spinlocks</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vpindex</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>runtime</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>synic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>stimer</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>reset</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>vendor_id</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>frequencies</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>reenlightenment</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tlbflush</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>ipi</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>avic</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>emsr_bitmap</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>xmm_input</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <defaults>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </defaults>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </hyperv>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <launchSecurity supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='sectype'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>tdx</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </launchSecurity>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </features>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: </domainCapabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.855 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 16:01:58 np0005532761 nova_compute[257263]: 2025-11-23 21:01:58.858 257267 DEBUG nova.virt.libvirt.host [None req-6963e285-0998-4010-a0b5-774520bf4960 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 16:01:58 np0005532761 nova_compute[257263]: <domainCapabilities>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <domain>kvm</domain>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <arch>x86_64</arch>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <vcpu max='240'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <iothreads supported='yes'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <os supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <enum name='firmware'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <loader supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='type'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>rom</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>pflash</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='readonly'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>yes</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='secure'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>no</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </loader>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  </os>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:  <cpu>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <enum name='maximumMigratable'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>on</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <value>off</value>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </enum>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <vendor>AMD</vendor>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='succor'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    </mode>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:    <mode name='custom' supported='yes'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='xsaves'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake'>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512-bf16'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512bw'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512cd'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512dq'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512f'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vl'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='avx512vnni'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='erms'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='hle'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='ibrs-all'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='invpcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pcid'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='pku'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='rtm'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:        <feature name='taa-no'/>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      </blockers>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:58 np0005532761 nova_compute[257263]:      <blockers model='Cooperlake-v1'>
Nov 23 16:03:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:11.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:12 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:12 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:12 np0005532761 rsyslogd[1006]: imjournal: 3892 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 23 16:03:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:03:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:13 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:14 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:14 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:14 np0005532761 podman[258707]: 2025-11-23 21:03:14.53502931 +0000 UTC m=+0.056787763 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd)
Nov 23 16:03:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:03:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:15.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:15 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:16 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:16 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:03:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:17.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:03:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:17 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:03:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:17] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:03:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:18 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:18 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:03:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:03:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:18 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:03:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:03:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:19 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:19.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:20 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2524002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:20 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 16:03:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:03:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:21 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:03:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:22 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:22 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25200019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:03:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:23.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:23 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:24 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:24 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:24 : epoch 692375cd : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:03:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:03:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:25.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:25 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25200019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:03:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:03:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:26 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:26 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:03:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:27.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:03:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:27.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:27 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:27] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:03:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:27] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:03:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:28 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:28 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.24556 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 23 16:03:28 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 23 16:03:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:03:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:29.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:29 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:30 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:30 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210330 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:03:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:03:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:31.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:31 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:32 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:32 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:03:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:03:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:03:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:03:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:33 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:34 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:34 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:34 np0005532761 podman[258777]: 2025-11-23 21:03:34.54109357 +0000 UTC m=+0.055639671 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 16:03:34 np0005532761 podman[258776]: 2025-11-23 21:03:34.557525221 +0000 UTC m=+0.078678710 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 16:03:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:03:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:35.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:35 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:35.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:36 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:36 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:03:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:37.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:03:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:37.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:03:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:37 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:37.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:38 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:38 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:03:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:39 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:39.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:40 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:40 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:03:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:41.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:41 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:41.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:42 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:42 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:43 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:43.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:44 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:44 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:45 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:45 np0005532761 podman[258857]: 2025-11-23 21:03:45.57577818 +0000 UTC m=+0.081686350 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 16:03:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:45.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:46 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:46 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:03:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:47 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:47.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:47.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:48 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:48 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f253c004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:03:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:03:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:49 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2520001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:49.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:49.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:50 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f25180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[241799]: 23/11/2025 21:03:50 : epoch 692375cd : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2538004320 fd 48 proxy ignored for local
Nov 23 16:03:50 np0005532761 kernel: ganesha.nfsd[258285]: segfault at 50 ip 00007f25ee94e32e sp 00007f25a3ffe210 error 4 in libntirpc.so.5.8[7f25ee933000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 23 16:03:50 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:03:50 np0005532761 systemd[1]: Started Process Core Dump (PID 258885/UID 0).
Nov 23 16:03:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 16:03:51 np0005532761 systemd-coredump[258886]: Process 241803 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007f25ee94e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:03:51 np0005532761 systemd[1]: systemd-coredump@8-258885-0.service: Deactivated successfully.
Nov 23 16:03:51 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:03:51 np0005532761 podman[258894]: 2025-11-23 21:03:51.246253978 +0000 UTC m=+0.029041678 container died fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:03:51 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4a01159bce429f4e132e9e1304fdcff6cd124a9bf6f8f75f4ccfa94847000cc0-merged.mount: Deactivated successfully.
Nov 23 16:03:51 np0005532761 podman[258894]: 2025-11-23 21:03:51.290511165 +0000 UTC m=+0.073298825 container remove fd41d73a51fb4c93c21e3c84e4921ababcc6fa54c4e23054064725ddfd1dbe38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Nov 23 16:03:51 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:03:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:51 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 16:03:51 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.645s CPU time.
Nov 23 16:03:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:03:51.860 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:03:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:03:51.861 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:03:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:03:51.861 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:03:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:52.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:53.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:54.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:03:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:03:54 np0005532761 podman[259110]: 2025-11-23 21:03:54.959258213 +0000 UTC m=+0.034609988 container create 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:03:54 np0005532761 systemd[1]: Started libpod-conmon-7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d.scope.
Nov 23 16:03:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:55.035146847 +0000 UTC m=+0.110498622 container init 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:54.944089747 +0000 UTC m=+0.019441542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:55.041718714 +0000 UTC m=+0.117070499 container start 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:55.044725524 +0000 UTC m=+0.120077329 container attach 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:03:55 np0005532761 stoic_cannon[259127]: 167 167
Nov 23 16:03:55 np0005532761 systemd[1]: libpod-7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d.scope: Deactivated successfully.
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:55.04789157 +0000 UTC m=+0.123243335 container died 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 16:03:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-02aa413d5575b2c3f0f655933b2346f033b91e64afb32899ecd554e81d144a92-merged.mount: Deactivated successfully.
Nov 23 16:03:55 np0005532761 podman[259110]: 2025-11-23 21:03:55.083313039 +0000 UTC m=+0.158664814 container remove 7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 16:03:55 np0005532761 systemd[1]: libpod-conmon-7989928b0310255591765423e8cca4daf9f6d93b7972ab39314e9c75a1186e2d.scope: Deactivated successfully.
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.219583861 +0000 UTC m=+0.035875442 container create 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:03:55 np0005532761 systemd[1]: Started libpod-conmon-9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7.scope.
Nov 23 16:03:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.29042114 +0000 UTC m=+0.106712741 container init 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.203344726 +0000 UTC m=+0.019636327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.300362406 +0000 UTC m=+0.116653977 container start 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.304098486 +0000 UTC m=+0.120390057 container attach 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 16:03:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:03:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:55 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:03:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210355 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:03:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:55.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:55 np0005532761 amazing_bhaskara[259167]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:03:55 np0005532761 amazing_bhaskara[259167]: --> All data devices are unavailable
Nov 23 16:03:55 np0005532761 systemd[1]: libpod-9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7.scope: Deactivated successfully.
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.615765859 +0000 UTC m=+0.432057430 container died 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:03:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-905b43a0cc047a9e13576867a9fbb2ced19551793b0be589cd57adc986cf645d-merged.mount: Deactivated successfully.
Nov 23 16:03:55 np0005532761 podman[259151]: 2025-11-23 21:03:55.835288553 +0000 UTC m=+0.651580134 container remove 9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:03:55 np0005532761 systemd[1]: libpod-conmon-9a0b2799fd79a9c04b9bab062ba2fc328d247db108cb7cefc6e1dd6f1cbcebe7.scope: Deactivated successfully.
Nov 23 16:03:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:56 np0005532761 podman[259284]: 2025-11-23 21:03:56.441183812 +0000 UTC m=+0.025910346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:56 np0005532761 podman[259284]: 2025-11-23 21:03:56.774389233 +0000 UTC m=+0.359115677 container create f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:03:56 np0005532761 systemd[1]: Started libpod-conmon-f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f.scope.
Nov 23 16:03:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:03:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:03:57 np0005532761 podman[259284]: 2025-11-23 21:03:57.135382258 +0000 UTC m=+0.720108722 container init f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 16:03:57 np0005532761 podman[259284]: 2025-11-23 21:03:57.142335664 +0000 UTC m=+0.727062098 container start f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Nov 23 16:03:57 np0005532761 adoring_fermi[259302]: 167 167
Nov 23 16:03:57 np0005532761 systemd[1]: libpod-f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f.scope: Deactivated successfully.
Nov 23 16:03:57 np0005532761 podman[259284]: 2025-11-23 21:03:57.250618166 +0000 UTC m=+0.835344650 container attach f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:03:57 np0005532761 podman[259284]: 2025-11-23 21:03:57.251456248 +0000 UTC m=+0.836182712 container died f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:03:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:57 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9cc8ed713bf8b4c1563aef901a897c26dad01ac2d1ddf6dfb3f353a85cfc387f-merged.mount: Deactivated successfully.
Nov 23 16:03:57 np0005532761 podman[259284]: 2025-11-23 21:03:57.692774657 +0000 UTC m=+1.277501111 container remove f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:03:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:03:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:03:57 np0005532761 systemd[1]: libpod-conmon-f44f84ea79ad8d5ffd2c1b84e8addc51b4de91d3afeb3de190e2882413ccdb1f.scope: Deactivated successfully.
Nov 23 16:03:57 np0005532761 podman[259326]: 2025-11-23 21:03:57.896339712 +0000 UTC m=+0.091336608 container create 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:03:57 np0005532761 podman[259326]: 2025-11-23 21:03:57.826944902 +0000 UTC m=+0.021941818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:57 np0005532761 systemd[1]: Started libpod-conmon-30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3.scope.
Nov 23 16:03:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94d056b64927e16e1a111571ef2cfa0eee686dc287f8600f8ec841bbb7df681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94d056b64927e16e1a111571ef2cfa0eee686dc287f8600f8ec841bbb7df681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94d056b64927e16e1a111571ef2cfa0eee686dc287f8600f8ec841bbb7df681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94d056b64927e16e1a111571ef2cfa0eee686dc287f8600f8ec841bbb7df681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:58 np0005532761 podman[259326]: 2025-11-23 21:03:58.05443284 +0000 UTC m=+0.249429756 container init 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 16:03:58 np0005532761 podman[259326]: 2025-11-23 21:03:58.061467758 +0000 UTC m=+0.256464644 container start 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:03:58 np0005532761 podman[259326]: 2025-11-23 21:03:58.101749868 +0000 UTC m=+0.296746784 container attach 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]: {
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:    "1": [
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:        {
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "devices": [
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "/dev/loop3"
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            ],
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "lv_name": "ceph_lv0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "lv_size": "21470642176",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "name": "ceph_lv0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "tags": {
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.cluster_name": "ceph",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.crush_device_class": "",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.encrypted": "0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.osd_id": "1",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.type": "block",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.vdo": "0",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:                "ceph.with_tpm": "0"
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            },
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "type": "block",
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:            "vg_name": "ceph_vg0"
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:        }
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]:    ]
Nov 23 16:03:58 np0005532761 peaceful_nash[259342]: }
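Annotation: the JSON block the peaceful_nash container just printed is a per-OSD inventory in the shape ceph-volume emits: OSD 1 is backed by logical volume ceph_vg0/ceph_lv0 on /dev/loop3, roughly 20 GiB (lv_size 21470642176 bytes / 2^30 ≈ 20.0 GiB), tagged with the cluster fsid and osd_fsid that cephadm uses to match the LV to its OSD. A sketch of extracting the osd_id-to-device mapping from output of this shape; the direct `ceph-volume lvm list --format json` invocation is an assumption, only the JSON structure is taken from the log:

    import json
    import subprocess

    # Assumed invocation; inside a cephadm shell this prints JSON keyed by
    # OSD id, like the block logged above.
    raw = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30  # "21470642176" -> ~20.0
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  f"{size_gib:.1f} GiB", lv["tags"]["ceph.osd_fsid"])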
Nov 23 16:03:58 np0005532761 systemd[1]: libpod-30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3.scope: Deactivated successfully.
Nov 23 16:03:58 np0005532761 podman[259326]: 2025-11-23 21:03:58.358947981 +0000 UTC m=+0.553944877 container died 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:03:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b94d056b64927e16e1a111571ef2cfa0eee686dc287f8600f8ec841bbb7df681-merged.mount: Deactivated successfully.
Nov 23 16:03:58 np0005532761 podman[259326]: 2025-11-23 21:03:58.785948894 +0000 UTC m=+0.980945790 container remove 30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:03:58 np0005532761 systemd[1]: libpod-conmon-30091d9e93369ea658f09f411e95628338f07fac195b4f422a5323e0b8b3d2d3.scope: Deactivated successfully.
Nov 23 16:03:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:03:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:03:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.327919775 +0000 UTC m=+0.022264944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.452385959 +0000 UTC m=+0.146731158 container create afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:03:59 np0005532761 systemd[1]: Started libpod-conmon-afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1.scope.
Nov 23 16:03:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.583 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.583 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.603 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.604 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.604 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.614573179 +0000 UTC m=+0.308918368 container init afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.621 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.621 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.622 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 nova_compute[257263]: 2025-11-23 21:03:59.622 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.626601101 +0000 UTC m=+0.320946270 container start afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:03:59 np0005532761 naughty_robinson[259474]: 167 167
Nov 23 16:03:59 np0005532761 systemd[1]: libpod-afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1.scope: Deactivated successfully.
Nov 23 16:03:59 np0005532761 conmon[259474]: conmon afb6d2ddd7044cdaa884 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1.scope/container/memory.events
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.636133345 +0000 UTC m=+0.330478554 container attach afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.636580727 +0000 UTC m=+0.330925896 container died afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:03:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-220627ef63a69eb8e2814dea6ac8e25ee8dc89a7a2ac52186a99c2d0a5ef8ecb-merged.mount: Deactivated successfully.
Nov 23 16:03:59 np0005532761 podman[259458]: 2025-11-23 21:03:59.697945806 +0000 UTC m=+0.392290975 container remove afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:03:59 np0005532761 systemd[1]: libpod-conmon-afb6d2ddd7044cdaa884d06c57c4a925d7bd7d1f6bab262068bd563c87fb83f1.scope: Deactivated successfully.
Nov 23 16:03:59 np0005532761 podman[259499]: 2025-11-23 21:03:59.867720588 +0000 UTC m=+0.041169939 container create f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:03:59 np0005532761 systemd[1]: Started libpod-conmon-f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1.scope.
Nov 23 16:03:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:03:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c553d91faec5b78ce2703ffe83207d7f221682434b311e6f7658e0a6fc584ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c553d91faec5b78ce2703ffe83207d7f221682434b311e6f7658e0a6fc584ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c553d91faec5b78ce2703ffe83207d7f221682434b311e6f7658e0a6fc584ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c553d91faec5b78ce2703ffe83207d7f221682434b311e6f7658e0a6fc584ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:03:59 np0005532761 podman[259499]: 2025-11-23 21:03:59.851215738 +0000 UTC m=+0.024665119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:03:59 np0005532761 podman[259499]: 2025-11-23 21:03:59.947956791 +0000 UTC m=+0.121406162 container init f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:03:59 np0005532761 podman[259499]: 2025-11-23 21:03:59.956935841 +0000 UTC m=+0.130385192 container start f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 16:03:59 np0005532761 podman[259499]: 2025-11-23 21:03:59.960779433 +0000 UTC m=+0.134228784 container attach f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:04:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.077 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.077 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.078 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.078 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.078 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470100489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.554 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:04:00 np0005532761 lvm[259612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:04:00 np0005532761 lvm[259612]: VG ceph_vg0 finished
Nov 23 16:04:00 np0005532761 reverent_cartwright[259515]: {}
Nov 23 16:04:00 np0005532761 systemd[1]: libpod-f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1.scope: Deactivated successfully.
Nov 23 16:04:00 np0005532761 systemd[1]: libpod-f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1.scope: Consumed 1.050s CPU time.
Nov 23 16:04:00 np0005532761 podman[259499]: 2025-11-23 21:04:00.66661476 +0000 UTC m=+0.840064111 container died f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:04:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c553d91faec5b78ce2703ffe83207d7f221682434b311e6f7658e0a6fc584ae9-merged.mount: Deactivated successfully.
Nov 23 16:04:00 np0005532761 podman[259499]: 2025-11-23 21:04:00.70857877 +0000 UTC m=+0.882028121 container remove f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:04:00 np0005532761 systemd[1]: libpod-conmon-f7ffe4f9acfc4ba0b93bc518fda4f6c656ba5b50c59442f603966377ddc677a1.scope: Deactivated successfully.
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.734 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.736 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4884MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.736 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.736 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:04:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.788 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.788 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:04:00 np0005532761 nova_compute[257263]: 2025-11-23 21:04:00.806 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:04:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 16:04:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:04:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54100438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:04:01 np0005532761 nova_compute[257263]: 2025-11-23 21:04:01.267 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:04:01 np0005532761 nova_compute[257263]: 2025-11-23 21:04:01.273 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:04:01 np0005532761 nova_compute[257263]: 2025-11-23 21:04:01.288 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:04:01 np0005532761 nova_compute[257263]: 2025-11-23 21:04:01.290 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:04:01 np0005532761 nova_compute[257263]: 2025-11-23 21:04:01.291 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
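Annotation: the resource-tracker pass above ends with placement confirming an unchanged inventory. Applying placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio (the formula is standard placement behavior, not something stated in the log), the logged inventory yields 32 schedulable vCPUs, 7167 MB of RAM, and 53.1 GB of disk:

    # Inventory exactly as logged for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        capacity = (f["total"] - f["reserved"]) * f["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1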
Nov 23 16:04:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:01 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 9.
Nov 23 16:04:01 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:04:01 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.645s CPU time.
Nov 23 16:04:01 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:04:01 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:01 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:01 np0005532761 podman[259722]: 2025-11-23 21:04:01.944999503 +0000 UTC m=+0.042611249 container create 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:04:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b34871df0492df8aa06567f55706d00efa7acb7cf3bdb58155e212f0836f0cf/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b34871df0492df8aa06567f55706d00efa7acb7cf3bdb58155e212f0836f0cf/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b34871df0492df8aa06567f55706d00efa7acb7cf3bdb58155e212f0836f0cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b34871df0492df8aa06567f55706d00efa7acb7cf3bdb58155e212f0836f0cf/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:02 np0005532761 podman[259722]: 2025-11-23 21:04:02.004555613 +0000 UTC m=+0.102167359 container init 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:04:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:02 np0005532761 podman[259722]: 2025-11-23 21:04:02.016155952 +0000 UTC m=+0.113767698 container start 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:04:02 np0005532761 podman[259722]: 2025-11-23 21:04:01.925207324 +0000 UTC m=+0.022819100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:04:02 np0005532761 bash[259722]: 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:04:02 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:04:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:04:03
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.rgw.root', 'images', '.nfs', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:04:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:04:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:03.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
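The pg_autoscaler lines above are internally consistent: each raw PG target is the pool's fraction of used capacity times its bias times a cluster-wide PG budget of 300, which would match the default mon_target_pg_per_osd of 100 across this cluster's three OSDs. The module then quantizes to a power of two and leaves pg_num untouched when the change is below its threshold, hence "quantized to 32 (current 32)" for the empty pools. A minimal sketch that reproduces the logged targets; the 300 budget and the omitted quantization details are assumptions, not Ceph's actual code:

    import math

    # Hedged reconstruction of the pg_autoscaler arithmetic seen above.
    # ASSUMPTION: pg_budget=300 ~ mon_target_pg_per_osd (100) * 3 OSDs;
    # the real module derives the budget from the CRUSH root and applies
    # power-of-two quantization plus a no-op threshold, both omitted here.
    def raw_pg_target(usage_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return usage_ratio * bias * pg_budget

    # Reproduces the '.mgr' and 'cephfs.cephfs.meta' targets logged above:
    assert math.isclose(raw_pg_target(7.185749983720779e-06, 1.0),
                        0.0021557249951162337, rel_tol=1e-12)
    assert math.isclose(raw_pg_target(5.087256625643029e-07, 4.0),
                        0.0006104707950771635, rel_tol=1e-12)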
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:04:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
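Each pool name appears twice in the load_schedules lines because two separate rbd_support handlers, MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler, scan the same RBD pools, and their output interleaves as they run concurrently (the empty start_after= marks the first page of a paginated listing). A purely illustrative sketch of that interleaving, not the module's real code:

    import logging
    import threading

    # Illustrative only: two handlers scanning the same pools on separate
    # threads, which is why each pool is logged twice above.
    logging.basicConfig(format="[rbd_support INFO root] %(message)s",
                        level=logging.INFO)
    POOLS = ["vms", "volumes", "backups", "images"]

    def load_schedules(handler: str) -> None:
        logging.info("%s: load_schedules", handler)
        for pool in POOLS:
            logging.info("load_schedules: %s, start_after=", pool)

    for name in ("MirrorSnapshotScheduleHandler", "TrashPurgeScheduleHandler"):
        threading.Thread(target=load_schedules, args=(name,)).start()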
Nov 23 16:04:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
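The beast access lines above have a fixed field order: request pointer, client address, a placeholder, user (anonymous here), timestamp, request line, HTTP status, body bytes, three more placeholder fields, and latency. A small parser tailored to exactly this shape (not a general RGW log grammar):

    import re

    # Matches the beast access-log lines above.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'- - - latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous '
            '[23/Nov/2025:21:04:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    assert m is not None and m.group("status") == "200"
    print(m.group("ip"), m.group("request"), m.group("latency"))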
Nov 23 16:04:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210404 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:04:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:05 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 16:04:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:05 np0005532761 podman[259809]: 2025-11-23 21:04:05.529527401 +0000 UTC m=+0.049002129 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:04:05 np0005532761 podman[259808]: 2025-11-23 21:04:05.563598731 +0000 UTC m=+0.086816049 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
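Both health_status=healthy events above come from podman periodically executing each container's configured healthcheck (test: /openstack/healthcheck, per the embedded config_data). The same check can be triggered by hand; a sketch shelling out to podman, with the container name taken from the log and exit status 0 meaning healthy:

    import subprocess

    # Manually run the configured healthcheck for one of the containers
    # logged above; `podman healthcheck run` exits 0 when the check passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")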
Nov 23 16:04:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:04:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:04:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:07.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
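This warn/error pair is Alertmanager's standard webhook failure pattern: each POST fails fast with a dial timeout to port 8443, the dispatcher retries, and the notification is abandoned once its context deadline passes. A minimal sketch of the same retry-until-deadline shape, with the receiver URL taken from the log and the timing constants assumed:

    import time
    import urllib.error
    import urllib.request

    # Illustrative retry loop mirroring the dispatcher behaviour above;
    # the 10 s deadline and 3 s per-attempt timeout are assumptions.
    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    deadline = time.monotonic() + 10.0

    attempt = 0
    while time.monotonic() < deadline:
        attempt += 1
        try:
            urllib.request.urlopen(URL, data=b"{}", timeout=3.0)
            break
        except (urllib.error.URLError, OSError) as exc:
            print(f"attempt {attempt} failed, will retry later: {exc}")
            time.sleep(1.0)
    else:
        print(f"notify retry canceled after {attempt} attempts: deadline exceeded")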
Nov 23 16:04:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:04:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:04:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:08 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:08 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:08 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:04:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:09.065 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:04:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:09.066 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:04:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:09.066 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
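The transaction above sets external_ids['neutron:ovn-metadata-sb-cfg'] = '2' on this chassis's Chassis_Private row, acknowledging nb_cfg=2 from the SB_Global update. A hedged sketch of issuing the same update through ovsdbapp, assuming sb_api is an already-connected southbound API object (connection setup omitted; the if_exists keyword mirrors the logged DbSetCommand but may be absent in older ovsdbapp releases):

    # Sketch only: replays the DbSetCommand from the log above.
    # ASSUMES sb_api is a connected ovsdbapp southbound API object.
    CHASSIS = "fa015a79-13cd-4722-b3c7-7f2e111a2432"  # record UUID from the log

    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            "Chassis_Private", CHASSIS,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": "2"}),
            if_exists=True))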
Nov 23 16:04:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:10 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:10 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:10 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:04:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:11.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.839640) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931851839703, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 4252774, "memory_usage": 4319040, "flush_reason": "Manual Compaction"}
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931851911448, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4132392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19969, "largest_seqno": 22098, "table_properties": {"data_size": 4122798, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19855, "raw_average_key_size": 20, "raw_value_size": 4103685, "raw_average_value_size": 4191, "num_data_blocks": 264, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931631, "oldest_key_time": 1763931631, "file_creation_time": 1763931851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 71835 microseconds, and 10749 cpu microseconds.
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.911499) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4132392 bytes OK
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.911521) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.936260) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.936287) EVENT_LOG_v1 {"time_micros": 1763931851936281, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.936307) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4244057, prev total WAL file size 4244057, number of live WAL files 2.
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.937531) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4035KB)], [44(12MB)]
Nov 23 16:04:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931851937590, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17510971, "oldest_snapshot_seqno": -1}
Nov 23 16:04:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5437 keys, 15313403 bytes, temperature: kUnknown
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852484006, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15313403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15274708, "index_size": 23993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137184, "raw_average_key_size": 25, "raw_value_size": 15174002, "raw_average_value_size": 2790, "num_data_blocks": 991, "num_entries": 5437, "num_filter_entries": 5437, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.484331) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15313403 bytes
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.488865) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.0 rd, 28.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.8 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 5957, records dropped: 520 output_compression: NoCompression
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.488897) EVENT_LOG_v1 {"time_micros": 1763931852488883, "job": 22, "event": "compaction_finished", "compaction_time_micros": 546499, "compaction_time_cpu_micros": 27627, "output_level": 6, "num_output_files": 1, "total_output_size": 15313403, "num_input_records": 5957, "num_output_records": 5437, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852489857, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852492471, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:11.937437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.492544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.492556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.492558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.492560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:04:12.492562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
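The amplification figures in the job 22 summary follow directly from the byte counts logged in the same block: the 4,132,392-byte L0 table (#46) plus the old L6 table (input_data_size 17,510,971 in total) were compacted into a 15,313,403-byte output (#47). Recomputing them reproduces the logged write-amplify(3.7) and read-write-amplify(7.9):

    # Recompute RocksDB's amplification figures from the event log above.
    l0_in = 4_132_392      # table #46, the L0 flush output of job 21
    total_in = 17_510_971  # "input_data_size" for compaction job 22
    out = 15_313_403       # table #47, the compaction output

    write_amplify = out / l0_in                    # bytes written per L0 byte
    read_write_amplify = (total_in + out) / l0_in  # bytes moved per L0 byte
    print(f"write-amplify={write_amplify:.1f}")            # 3.7
    print(f"read-write-amplify={read_write_amplify:.1f}")  # 7.9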
Nov 23 16:04:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:04:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:13.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:14.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 16:04:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:15.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:16.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:16 np0005532761 podman[259861]: 2025-11-23 21:04:16.567759946 +0000 UTC m=+0.084702512 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:04:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:16 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 16:04:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:17.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:04:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:17.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:04:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:17.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:04:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:17 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:17.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:04:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:04:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:18.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:18 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:18 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:04:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
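The handle_command/audit pair shows the mgr (mgr.compute-0.oyehye) polling `osd blocklist ls` as a JSON mon command. The same call can be made from the librados Python binding; a minimal sketch assuming a local ceph.conf and a usable keyring:

    import json
    import rados

    # Issue the same mon command the mgr dispatches above.
    # ASSUMES /etc/ceph/ceph.conf and a default (client.admin) keyring.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        if ret == 0:
            print(json.loads(outbuf or b"[]"))
        else:
            print(f"mon_command failed: {ret} {outs}")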
Nov 23 16:04:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 16:04:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210419 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:19 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:19 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:19 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:20.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:20 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:20 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Nov 23 16:04:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:21 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:22.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:22 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:22 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0001b40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:22 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:04:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 23 16:04:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:23 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:24.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:24 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:24 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210424 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Nov 23 16:04:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:25 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4001a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:26 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:26 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210426 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
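The haproxy state flips above are plain Layer4 checks: a TCP connect that is refused marks a backend DOWN, a successful connect marks it UP again. The recurring ganesha svc_vc_recv "proxy header" events in between line up with these probes, consistent with ganesha expecting a PROXY-protocol header on connections that the checker opens and closes without sending one. The probe itself in a few lines, with an assumed host and port since the backend address is not in the log:

    import socket

    # Minimal Layer4-style probe, as haproxy performs above.
    # HOST/PORT are placeholders; the real backend address is not logged.
    HOST, PORT = "127.0.0.1", 2049

    try:
        with socket.create_connection((HOST, PORT), timeout=2.0):
            print(f"{HOST}:{PORT} UP (Layer4 check passed)")
    except OSError as exc:
        print(f"{HOST}:{PORT} DOWN (Layer4 connection problem: {exc})")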
Nov 23 16:04:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:27.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=cleanup t=2025-11-23T21:04:27.168180881Z level=info msg="Completed cleanup jobs" duration=24.256337ms
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugins.update.checker t=2025-11-23T21:04:27.280751907Z level=info msg="Update check succeeded" duration=78.416654ms
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana.update.checker t=2025-11-23T21:04:27.281908628Z level=info msg="Update check succeeded" duration=74.799808ms
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:27 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:27.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:27] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 16:04:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:27] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 23 16:04:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:28.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:28 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:28 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0001b40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 16:04:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:29 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:30.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:30 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:30 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
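
The monitor's _set_new_cache_sizes figures are raw byte counts. A quick worked conversion (1 MiB = 2^20 bytes):

    # Convert the _set_new_cache_sizes values above from bytes to MiB.
    MiB = 1 << 20
    for name, val in [("cache_size", 1020054731),
                      ("inc_alloc", 348127232),
                      ("full_alloc", 348127232),
                      ("kv_alloc", 318767104)]:
        print(f"{name}: {val / MiB:.1f} MiB")
    # cache_size ~972.8 MiB; inc_alloc/full_alloc 332.0 MiB; kv_alloc 304.0 MiB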
Nov 23 16:04:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 16:04:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:31 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:32 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:32 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 23 16:04:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:04:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
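
The mgr re-issues this blocklist query periodically (15 s apart in this capture: 16:04:33, 16:04:48, 16:05:03). The same query from a shell is `ceph osd blocklist ls --format json`; a sketch of driving it from Python (assumes a reachable cluster and a usable client keyring):

    import json
    import subprocess

    # Same query the mgr dispatches above, issued via the standard CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))  # JSON list of blocklisted client addresses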
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:04:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:04:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:33 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:34.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:34 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:34 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:34 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Nov 23 16:04:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:35 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:36 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:36 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:36 np0005532761 podman[259943]: 2025-11-23 21:04:36.554497519 +0000 UTC m=+0.057203188 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:04:36 np0005532761 podman[259942]: 2025-11-23 21:04:36.569566112 +0000 UTC m=+0.086679606 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
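
These two podman lines are periodic healthcheck events: each container's configured test (the '/openstack/healthcheck' script mounted read-only into the container) ran and reported healthy with a zero failing streak. The same check can be driven by hand with standard podman commands, sketched here from Python (on older podman the inspect key is .State.Healthcheck.Status rather than .State.Health.Status):

    import subprocess

    # Run the container's configured healthcheck once, then read back the
    # recorded state; container name taken from the log line above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=True)
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)  # e.g. "healthy", matching health_status above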
Nov 23 16:04:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:37.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:37.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
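
The Alertmanager dispatcher delivers firing alerts by POSTing to each configured ceph-dashboard webhook receiver and retrying on failure; here both receivers (compute-1 and compute-2 on port 8443) are unreachable, so every attempt ends in a timeout. A probe sketch for one of them (URL taken from the log; the empty payload is illustrative, not Alertmanager's exact webhook schema):

    import requests

    # Probe the receiver endpoint the dispatcher keeps timing out against.
    try:
        r = requests.post(
            "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
            json={"alerts": []}, timeout=5,
        )
        print(r.status_code)
    except requests.exceptions.RequestException as exc:
        print("unreachable:", exc)  # matches the "dial tcp ... i/o timeout" above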
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:37 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4003490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:04:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:37 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:37 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:38.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:38 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:38 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:39 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:40 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4003490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:40 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:40 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
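
Together with the reaper lines at 16:04:34 and 16:04:37 this is a complete grace cycle: grace began with a 90 s budget, the backend reload found no clients with reclaimable state (clid count(0)), and grace was lifted after roughly six seconds instead of running the timer out. A model of that early-lift decision (illustrative only, not ganesha source):

    # Early lift: with no clients holding reclaimable state there is nothing
    # to wait for, so grace can end before the 90 s timer expires.
    def try_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        return reclaim_complete != 0 or clid_count == 0

    print(try_lift_grace(reclaim_complete=0, clid_count=0))  # True -> lift grace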
Nov 23 16:04:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:04:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:41 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:42.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:42 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:42 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:04:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:43 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:44.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:44 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:44 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:04:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:45 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:46.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:46 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:46 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210446 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:04:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:47.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:04:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:47 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:47 np0005532761 podman[260026]: 2025-11-23 21:04:47.539700641 +0000 UTC m=+0.058145274 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 16:04:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:04:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:04:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:48.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:04:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:04:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:48 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e4002590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:48 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:04:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:49 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:50 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:50 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:04:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:51 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:51.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:51.862 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:04:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:51.862 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:04:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:04:51.862 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:04:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:52.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:52 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:52 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:53 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c0000d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:53.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:54 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:54 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:04:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:55 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:56.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:56 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:56 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3c0001840 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:04:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:04:57.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:04:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:57 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:57] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:04:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:04:57] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 23 16:04:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:58 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:58 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:04:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:04:59 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:04:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:00 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:00 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.291 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.291 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.291 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.304 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.304 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.304 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.304 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:00 np0005532761 nova_compute[257263]: 2025-11-23 21:05:00.304 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:01 np0005532761 nova_compute[257263]: 2025-11-23 21:05:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:01 np0005532761 nova_compute[257263]: 2025-11-23 21:05:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:01 np0005532761 nova_compute[257263]: 2025-11-23 21:05:01.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:05:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:01 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 16:05:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
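
The acquire/release pairs above (like the _check_child_processes pair earlier) are emitted by oslo.concurrency's lockutils whenever traced code takes a named lock around a critical section. The usual pattern looks like this (real oslo_concurrency API; the bodies are elided):

    from oslo_concurrency import lockutils

    # Decorator form: every call serializes on the named lock and emits the
    # "Acquiring lock" / "acquired" / "released" DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass

    # Equivalent inline form.
    with lockutils.lock("compute_resources"):
        pass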
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.056 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.057 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:05:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:02 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3b80016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:05:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500924673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.488 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.683 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.685 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4934MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.686 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.687 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.752 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.753 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:05:02 np0005532761 nova_compute[257263]: 2025-11-23 21:05:02.777 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:05:03
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs', 'default.rgw.log', 'vms']
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
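The five balancer records above are one optimizer pass: upmap mode with a 5% max-misplaced ceiling, scanning all twelve pools and preparing 0 of the 10 upmap changes allowed per pass (the usual upmap_max_optimizations default), i.e. nothing needed rebalancing. A minimal sketch of inspecting that state, assuming the standard mgr balancer CLI and that its status output is JSON:

    import json
    import subprocess

    # 'ceph balancer status' reports the module state the pass above ran with.
    status = json.loads(subprocess.check_output(["ceph", "balancer", "status"]))
    print(status.get("active"), status.get("mode"))  # expect: True upmap
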
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282201788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:05:03 np0005532761 nova_compute[257263]: 2025-11-23 21:05:03.313 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
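The ceph df probe above (launched at 21:05:02.777, returning 0 in 0.536s) is how the libvirt RBD image backend samples pool capacity for the resource tracker. A minimal sketch of the same probe, assuming the usual ceph df JSON layout ('pools' entries carrying 'stored' and 'max_avail' stats); the rbd_pool_usage name is illustrative only:

    import json
    import subprocess

    def rbd_pool_usage(pool_name, conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command the log records, with the output captured instead of logged.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        for pool in json.loads(out)["pools"]:
            if pool["name"] == pool_name:
                # 'max_avail' is what surfaces as free disk for the pool.
                return pool["stats"]["stored"], pool["stats"]["max_avail"]
        raise LookupError(f"pool {pool_name!r} not in ceph df output")

    # e.g. stored, avail = rbd_pool_usage("vms")
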
Nov 23 16:05:03 np0005532761 nova_compute[257263]: 2025-11-23 21:05:03.320 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:05:03 np0005532761 nova_compute[257263]: 2025-11-23 21:05:03.330 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:05:03 np0005532761 nova_compute[257263]: 2025-11-23 21:05:03.332 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:05:03 np0005532761 nova_compute[257263]: 2025-11-23 21:05:03.332 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
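The inventory record a few lines up fixes everything placement needs for this provider; schedulable capacity per resource class follows the documented placement check, (total - reserved) * allocation_ratio. A worked sketch using the logged numbers:

    # Numbers copied verbatim from the "Inventory has not changed" record above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1
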
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:03 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
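Each pg_autoscaler pair above is one pool evaluation, and every logged "pg target" value reproduces as usage_ratio * bias * (target PGs per OSD * OSD count), assuming the default of 100 PGs per OSD and the three ~20 GiB OSDs this 60 GiB cluster reports. A worked check:

    MON_TARGET_PG_PER_OSD = 100  # assumed default (mon_target_pg_per_osd)
    N_OSDS = 3                   # 60 GiB cluster of ~20 GiB OSDs

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * MON_TARGET_PG_PER_OSD * N_OSDS

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 (cephfs meta)
    # Targets are then quantized to a power of two; every pool keeps its
    # current pg_num here because the autoscaler only acts on large
    # (roughly 3x) discrepancies.
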
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:05:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
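The rbd_support block above is the MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler each rescanning the RBD pools (vms, volumes, backups, images) for schedules; the empty start_after= appears to be a resumption cursor, so each scan ran from the beginning and found nothing configured. A sketch of the equivalent CLI view, using the standard rbd subcommands (-R lists schedules at every level):

    import subprocess

    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "-R"],
                ["rbd", "trash", "purge", "schedule", "ls", "-R"]):
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        print(" ".join(cmd), "->", out.strip() or "(no schedules)")
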
Nov 23 16:05:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 16:05:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:04.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:04 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:04 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3b8002160 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:05:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:05:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:05:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:05 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:05:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:05:05 np0005532761 podman[260312]: 2025-11-23 21:05:05.917582389 +0000 UTC m=+0.043445341 container create b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:05:05 np0005532761 systemd[1]: Started libpod-conmon-b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a.scope.
Nov 23 16:05:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:05 np0005532761 podman[260312]: 2025-11-23 21:05:05.993616569 +0000 UTC m=+0.119479531 container init b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:05:05 np0005532761 podman[260312]: 2025-11-23 21:05:05.901778646 +0000 UTC m=+0.027641618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:06 np0005532761 podman[260312]: 2025-11-23 21:05:06.000905203 +0000 UTC m=+0.126768145 container start b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:05:06 np0005532761 podman[260312]: 2025-11-23 21:05:06.005430664 +0000 UTC m=+0.131293616 container attach b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:06 np0005532761 infallible_borg[260328]: 167 167
Nov 23 16:05:06 np0005532761 systemd[1]: libpod-b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a.scope: Deactivated successfully.
Nov 23 16:05:06 np0005532761 podman[260312]: 2025-11-23 21:05:06.007324025 +0000 UTC m=+0.133186987 container died b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 16:05:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-009143c4c74372f6c89d9fa355470ada0109e52d7e12e4feb919741d38268de4-merged.mount: Deactivated successfully.
Nov 23 16:05:06 np0005532761 podman[260312]: 2025-11-23 21:05:06.044588559 +0000 UTC m=+0.170451511 container remove b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:05:06 np0005532761 systemd[1]: libpod-conmon-b9603eb61b85712abd798cac3112f235e2d9865da97d886f9b24cb454466a28a.scope: Deactivated successfully.
Nov 23 16:05:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:06.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:06 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3cc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:06 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4000df0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:06 np0005532761 podman[260353]: 2025-11-23 21:05:06.197373989 +0000 UTC m=+0.047312944 container create 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:05:06 np0005532761 systemd[1]: Started libpod-conmon-8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc.scope.
Nov 23 16:05:06 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:06 np0005532761 podman[260353]: 2025-11-23 21:05:06.173861232 +0000 UTC m=+0.023800277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:06 np0005532761 podman[260353]: 2025-11-23 21:05:06.268981102 +0000 UTC m=+0.118920067 container init 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 16:05:06 np0005532761 podman[260353]: 2025-11-23 21:05:06.27718738 +0000 UTC m=+0.127126335 container start 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:05:06 np0005532761 podman[260353]: 2025-11-23 21:05:06.281631549 +0000 UTC m=+0.131570524 container attach 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:05:06 np0005532761 jolly_almeida[260370]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:05:06 np0005532761 jolly_almeida[260370]: --> All data devices are unavailable
Nov 23 16:05:06 np0005532761 systemd[1]: libpod-8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc.scope: Deactivated successfully.
Nov 23 16:05:06 np0005532761 podman[260387]: 2025-11-23 21:05:06.626800455 +0000 UTC m=+0.027930977 container died 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:05:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-82d18f20d07114fcf2a1615e211ccdc2b21a917ec0db4ce7214c65ba00271d27-merged.mount: Deactivated successfully.
Nov 23 16:05:06 np0005532761 podman[260387]: 2025-11-23 21:05:06.668954421 +0000 UTC m=+0.070084923 container remove 8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 16:05:06 np0005532761 systemd[1]: libpod-conmon-8e228bd888c9ab5a11f607a0d194359455ccdd8183030b1f60e5da9119c546bc.scope: Deactivated successfully.
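The jolly_almeida run above is cephadm re-applying the default_drive_group OSD spec: ceph-volume found no bare physical data devices and one LVM device already consumed by an OSD, hence "All data devices are unavailable" and a clean exit. A sketch of the same availability check, assuming the standard ceph-volume inventory JSON (a list of devices with 'path' and 'available' fields):

    import json
    import subprocess

    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    usable = [d["path"] for d in inv if d.get("available")]
    print(usable or "All data devices are unavailable")
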
Nov 23 16:05:06 np0005532761 podman[260393]: 2025-11-23 21:05:06.704061268 +0000 UTC m=+0.092993364 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 16:05:06 np0005532761 podman[260386]: 2025-11-23 21:05:06.708310252 +0000 UTC m=+0.097406942 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:05:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:07.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:05:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:07.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:05:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:07.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
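The three alertmanager records above are the ceph-dashboard receiver failing: both webhook targets (the dashboard instances on compute-1 and compute-2, port 8443) time out on connect, and after three attempts the notification is dropped. A minimal reproduction of the failing probe, URL copied from the log:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}",
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:
        # Matches the 'dial tcp ... i/o timeout' Alertmanager reports.
        print("unreachable:", exc)
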
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.217821116 +0000 UTC m=+0.042633539 container create a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:05:07 np0005532761 systemd[1]: Started libpod-conmon-a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693.scope.
Nov 23 16:05:07 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.201161061 +0000 UTC m=+0.025973514 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.301700185 +0000 UTC m=+0.126512628 container init a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.308844416 +0000 UTC m=+0.133656839 container start a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.313188592 +0000 UTC m=+0.138001035 container attach a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 16:05:07 np0005532761 thirsty_taussig[260550]: 167 167
Nov 23 16:05:07 np0005532761 systemd[1]: libpod-a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693.scope: Deactivated successfully.
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.31685943 +0000 UTC m=+0.141671853 container died a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:05:07 np0005532761 systemd[1]: var-lib-containers-storage-overlay-58886091c3cc57e7a4dfd3f099ec836e30355f020e7ff5bba3a41305eb9c7944-merged.mount: Deactivated successfully.
Nov 23 16:05:07 np0005532761 podman[260533]: 2025-11-23 21:05:07.357398773 +0000 UTC m=+0.182211206 container remove a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:07 np0005532761 systemd[1]: libpod-conmon-a35407512db91e020fbfb95186ff3a9a58070858dbb6928ef04e8d22adaf4693.scope: Deactivated successfully.
Nov 23 16:05:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:07 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3b8002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.57379003 +0000 UTC m=+0.084189149 container create 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:05:07 np0005532761 systemd[1]: Started libpod-conmon-4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da.scope.
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.554936167 +0000 UTC m=+0.065335296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:07 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0beed59c137f7697eae0b19cdcf570d48001a169504477466e3c4b8aa6eb7b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0beed59c137f7697eae0b19cdcf570d48001a169504477466e3c4b8aa6eb7b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0beed59c137f7697eae0b19cdcf570d48001a169504477466e3c4b8aa6eb7b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0beed59c137f7697eae0b19cdcf570d48001a169504477466e3c4b8aa6eb7b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.676703398 +0000 UTC m=+0.187102527 container init 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.682730819 +0000 UTC m=+0.193129928 container start 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.685841083 +0000 UTC m=+0.196240192 container attach 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:07] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:05:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:07] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:05:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:05:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806054957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:05:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:05:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806054957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]: {
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:    "1": [
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:        {
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "devices": [
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "/dev/loop3"
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            ],
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "lv_name": "ceph_lv0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "lv_size": "21470642176",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "name": "ceph_lv0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "tags": {
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.cluster_name": "ceph",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.crush_device_class": "",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.encrypted": "0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.osd_id": "1",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.type": "block",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.vdo": "0",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:                "ceph.with_tpm": "0"
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            },
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "type": "block",
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:            "vg_name": "ceph_vg0"
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:        }
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]:    ]
Nov 23 16:05:07 np0005532761 xenodochial_bartik[260592]: }
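The JSON emitted by the short-lived container above has the shape of ceph-volume's LVM inventory output: top-level keys are OSD ids, each mapping to a list of LV records with lv_path, lv_size (bytes, as a string) and a tags map. A minimal parsing sketch, assuming the output has been captured to a hypothetical file report.json:

    # Sketch: walk the ceph-volume-style JSON shown above and print
    # one summary line per OSD block device.
    import json

    with open('report.json') as f:          # hypothetical capture file
        report = json.load(f)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get('tags', {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({int(lv['lv_size']) / 2**30:.1f} GiB, "
                  f"fsid={tags.get('ceph.osd_fsid', '?')})")

For the record above this prints osd.1 on /dev/ceph_vg0/ceph_lv0 at roughly 20.0 GiB.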
Nov 23 16:05:07 np0005532761 systemd[1]: libpod-4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da.scope: Deactivated successfully.
Nov 23 16:05:07 np0005532761 podman[260576]: 2025-11-23 21:05:07.98462345 +0000 UTC m=+0.495022569 container died 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:05:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c0beed59c137f7697eae0b19cdcf570d48001a169504477466e3c4b8aa6eb7b8-merged.mount: Deactivated successfully.
Nov 23 16:05:08 np0005532761 podman[260576]: 2025-11-23 21:05:08.025955583 +0000 UTC m=+0.536354692 container remove 4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:05:08 np0005532761 systemd[1]: libpod-conmon-4f11464cb28f1d77768826d8ec8fd04cfef9f9ff677fefa5c6d5e85aa4cc64da.scope: Deactivated successfully.
Nov 23 16:05:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:08 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3d8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:08 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.586826879 +0000 UTC m=+0.042570157 container create 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:05:08 np0005532761 systemd[1]: Started libpod-conmon-2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0.scope.
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.567534784 +0000 UTC m=+0.023278092 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:08 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.681012474 +0000 UTC m=+0.136755772 container init 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.688912985 +0000 UTC m=+0.144656283 container start 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.693317592 +0000 UTC m=+0.149060890 container attach 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:05:08 np0005532761 systemd[1]: libpod-2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0.scope: Deactivated successfully.
Nov 23 16:05:08 np0005532761 lucid_buck[260721]: 167 167
Nov 23 16:05:08 np0005532761 conmon[260721]: conmon 2d2864c9eb3d922afaf6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0.scope/container/memory.events
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.696649771 +0000 UTC m=+0.152393049 container died 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 16:05:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-717aabdd6a301ce2932009a746bc4e17aad88e8d2d7f34ee21fbd1607164c142-merged.mount: Deactivated successfully.
Nov 23 16:05:08 np0005532761 podman[260705]: 2025-11-23 21:05:08.730581518 +0000 UTC m=+0.186324786 container remove 2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:05:08 np0005532761 systemd[1]: libpod-conmon-2d2864c9eb3d922afaf67d9e91a07eb50e22bbe0ff6013dc5a4075bfb75375f0.scope: Deactivated successfully.
Nov 23 16:05:08 np0005532761 podman[260743]: 2025-11-23 21:05:08.901156081 +0000 UTC m=+0.049279936 container create 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 16:05:08 np0005532761 systemd[1]: Started libpod-conmon-747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5.scope.
Nov 23 16:05:08 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:05:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251ac36974365e194e5577aadc2ebfeac974c4c4f461546afe731e131be533ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251ac36974365e194e5577aadc2ebfeac974c4c4f461546afe731e131be533ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251ac36974365e194e5577aadc2ebfeac974c4c4f461546afe731e131be533ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:08 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251ac36974365e194e5577aadc2ebfeac974c4c4f461546afe731e131be533ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:08 np0005532761 podman[260743]: 2025-11-23 21:05:08.885981877 +0000 UTC m=+0.034105732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:08 np0005532761 podman[260743]: 2025-11-23 21:05:08.991403841 +0000 UTC m=+0.139527686 container init 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:09 np0005532761 podman[260743]: 2025-11-23 21:05:09.005779605 +0000 UTC m=+0.153903440 container start 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 23 16:05:09 np0005532761 podman[260743]: 2025-11-23 21:05:09.009365231 +0000 UTC m=+0.157489066 container attach 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:05:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:09 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:09 np0005532761 lvm[260835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:05:09 np0005532761 lvm[260835]: VG ceph_vg0 finished
Nov 23 16:05:09 np0005532761 brave_banzai[260760]: {}
Nov 23 16:05:09 np0005532761 systemd[1]: libpod-747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5.scope: Deactivated successfully.
Nov 23 16:05:09 np0005532761 podman[260743]: 2025-11-23 21:05:09.675349093 +0000 UTC m=+0.823472938 container died 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:05:09 np0005532761 systemd[1]: libpod-747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5.scope: Consumed 1.071s CPU time.
Nov 23 16:05:09 np0005532761 systemd[1]: var-lib-containers-storage-overlay-251ac36974365e194e5577aadc2ebfeac974c4c4f461546afe731e131be533ec-merged.mount: Deactivated successfully.
Nov 23 16:05:09 np0005532761 podman[260743]: 2025-11-23 21:05:09.72467412 +0000 UTC m=+0.872797965 container remove 747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 16:05:09 np0005532761 systemd[1]: libpod-conmon-747f2aa59d2af748f47bde59591ed62c85e3baf46711c5428a3d6750c40d71e5.scope: Deactivated successfully.
Nov 23 16:05:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:05:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:05:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:10 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f40095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:10 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:11 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3f40095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:11.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:12 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3b8002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[259737]: 23/11/2025 21:05:12 : epoch 692376c2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3e80021d0 fd 49 proxy ignored for local
Nov 23 16:05:12 np0005532761 kernel: ganesha.nfsd[259885]: segfault at 50 ip 00007fa4a047532e sp 00007fa466ffc210 error 4 in libntirpc.so.5.8[7fa4a045a000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 23 16:05:12 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:05:12 np0005532761 systemd[1]: Started Process Core Dump (PID 260878/UID 0).
Nov 23 16:05:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:13 np0005532761 systemd-coredump[260879]: Process 259741 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 43:
                                                       #0  0x00007fa4a047532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Nov 23 16:05:13 np0005532761 systemd[1]: systemd-coredump@9-260878-0.service: Deactivated successfully.
Nov 23 16:05:13 np0005532761 systemd[1]: systemd-coredump@9-260878-0.service: Consumed 1.089s CPU time.
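The kernel segfault record a few lines up gives the faulting ip together with the base and size of libntirpc's executable mapping, so ip minus base yields a segment-relative offset (0x1b32e here). The systemd-coredump entry instead reports "+ 0x2232e" relative to the ELF load base, so the two offsets can legitimately differ, plausibly by the text segment's file offset. A small decoding sketch over the kernel line as logged:

    # Sketch: extract the fault offset from the kernel segfault record
    # above (string shortened to the fields the regex needs).
    import re

    line = ("ganesha.nfsd[259885]: segfault at 50 ip 00007fa4a047532e "
            "sp 00007fa466ffc210 error 4 in "
            "libntirpc.so.5.8[7fa4a045a000+2c000]")

    m = re.search(r"ip (\w+) .* in (\S+)\[(\w+)\+(\w+)\]", line)
    ip, module = m.group(1), m.group(2)
    base, size = int(m.group(3), 16), int(m.group(4), 16)
    offset = int(ip, 16) - base
    print(f"{module}: fault at segment offset {offset:#x} "
          f"(segment size {size:#x})")
    # -> libntirpc.so.5.8: fault at segment offset 0x1b32e (segment size 0x2c000)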
Nov 23 16:05:13 np0005532761 podman[260888]: 2025-11-23 21:05:13.438505141 +0000 UTC m=+0.027970018 container died 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:05:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8b34871df0492df8aa06567f55706d00efa7acb7cf3bdb58155e212f0836f0cf-merged.mount: Deactivated successfully.
Nov 23 16:05:13 np0005532761 podman[260888]: 2025-11-23 21:05:13.481894729 +0000 UTC m=+0.071359596 container remove 6752ee7cace9198161b6541c11d8398b97af0a4ad1e347a2801cd5b542fe9e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:05:13 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:05:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:13 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 16:05:13 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.395s CPU time.
Nov 23 16:05:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:14.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 16:05:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:17.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
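Alertmanager's "context deadline exceeded" above means the dashboard webhook receivers on compute-1 and compute-2 did not answer in time. A quick reachability probe of those exact endpoints (URLs taken verbatim from the log; the short timeout mirrors the deadline behaviour) might look like this sketch:

    # Sketch: POST an empty JSON body to the receiver endpoints that
    # Alertmanager reports as unreachable, with a 2s timeout.
    import urllib.request

    for host in ('compute-1', 'compute-2'):
        url = f'http://{host}.ctlplane.example.com:8443/api/prometheus_receiver'
        req = urllib.request.Request(
            url, data=b'{}', headers={'Content-Type': 'application/json'})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                print(url, '->', resp.status)
        except OSError as exc:   # URLError / timeout both land here
            print(url, '->', exc)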
Nov 23 16:05:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:05:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:17.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:05:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:17] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:05:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:17] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Nov 23 16:05:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:18.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:05:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:05:18 np0005532761 podman[260935]: 2025-11-23 21:05:18.535837332 +0000 UTC m=+0.056672865 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:05:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210519 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:05:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:19.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:20.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:21.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 16:05:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:23 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 10.
Nov 23 16:05:23 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:05:23 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.395s CPU time.
Nov 23 16:05:23 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:05:23 np0005532761 podman[261038]: 2025-11-23 21:05:23.94130404 +0000 UTC m=+0.050183370 container create eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 16:05:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2127ce28d8e97f32fa63c22bedb0778b8ce3d7eb2ffb32e6379ddeb1b1031f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2127ce28d8e97f32fa63c22bedb0778b8ce3d7eb2ffb32e6379ddeb1b1031f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2127ce28d8e97f32fa63c22bedb0778b8ce3d7eb2ffb32e6379ddeb1b1031f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2127ce28d8e97f32fa63c22bedb0778b8ce3d7eb2ffb32e6379ddeb1b1031f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:24 np0005532761 podman[261038]: 2025-11-23 21:05:24.006921372 +0000 UTC m=+0.115800692 container init eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:05:24 np0005532761 podman[261038]: 2025-11-23 21:05:24.012929673 +0000 UTC m=+0.121808983 container start eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 16:05:24 np0005532761 podman[261038]: 2025-11-23 21:05:23.918393428 +0000 UTC m=+0.027272768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:24 np0005532761 bash[261038]: eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee
Nov 23 16:05:24 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
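The unit came back at restart counter 10, so this crash loop has been running for a while. To confirm how many restarts preceded it, the journal can be filtered for this unit; a minimal sketch, assuming the python3-systemd package is available:

    # Sketch: list restart-related journal entries for the NFS unit
    # whose recovery is logged above, for the current boot.
    from systemd import journal

    UNIT = ('ceph-03808be8-ae4a-5548-82e6-4a294f1bc627'
            '@nfs.cephfs.2.0.compute-0.bfglcy.service')

    reader = journal.Reader()
    reader.this_boot()
    reader.add_match(_SYSTEMD_UNIT=UNIT)
    for entry in reader:
        if 'restart' in entry['MESSAGE'].lower():
            print(entry['__REALTIME_TIMESTAMP'], entry['MESSAGE'])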
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:05:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:05:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:05:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:05:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:27.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:05:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:05:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 23 16:05:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:05:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:05:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:05:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:05:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 16:05:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:05:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:05:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:05:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:33.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:34.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:05:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:35.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:36.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fabb4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 16:05:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:37.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:05:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:37.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:05:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:37.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:37 np0005532761 podman[261130]: 2025-11-23 21:05:37.549894356 +0000 UTC m=+0.065390108 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 16:05:37 np0005532761 podman[261129]: 2025-11-23 21:05:37.592999967 +0000 UTC m=+0.108722774 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 23 16:05:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:38.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90000fa0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210538 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:05:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 16:05:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210539 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:05:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47785d0 =====
Nov 23 16:05:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47785d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47785d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=404 latency=1.027026653s ======
Nov 23 16:05:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:38.923 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=1.027026653s
Nov 23 16:05:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - - [23/Nov/2025:21:05:39.966 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 23 16:05:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 16:05:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:41.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94001940 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:05:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:43.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900023e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.368367) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945368408, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1075, "num_deletes": 251, "total_data_size": 1865579, "memory_usage": 1893368, "flush_reason": "Manual Compaction"}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945377034, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1182461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22099, "largest_seqno": 23173, "table_properties": {"data_size": 1178229, "index_size": 1820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10920, "raw_average_key_size": 20, "raw_value_size": 1169096, "raw_average_value_size": 2189, "num_data_blocks": 78, "num_entries": 534, "num_filter_entries": 534, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931853, "oldest_key_time": 1763931853, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8711 microseconds, and 4086 cpu microseconds.
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.377078) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1182461 bytes OK
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.377096) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.378433) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.378450) EVENT_LOG_v1 {"time_micros": 1763931945378446, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.378468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1860671, prev total WAL file size 1860671, number of live WAL files 2.
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379375) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1154KB)], [47(14MB)]
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945379440, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16495864, "oldest_snapshot_seqno": -1}
Nov 23 16:05:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94001940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5488 keys, 13036698 bytes, temperature: kUnknown
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945532790, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 13036698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13000945, "index_size": 20923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138543, "raw_average_key_size": 25, "raw_value_size": 12902691, "raw_average_value_size": 2351, "num_data_blocks": 856, "num_entries": 5488, "num_filter_entries": 5488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.533091) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13036698 bytes
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.534329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.5 rd, 85.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 5971, records dropped: 483 output_compression: NoCompression
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.534359) EVENT_LOG_v1 {"time_micros": 1763931945534346, "job": 24, "event": "compaction_finished", "compaction_time_micros": 153456, "compaction_time_cpu_micros": 25847, "output_level": 6, "num_output_files": 1, "total_output_size": 13036698, "num_input_records": 5971, "num_output_records": 5488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945534890, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945539903, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:45.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 23 16:05:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 23 16:05:46 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 23 16:05:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:05:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:47.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:05:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 23 16:05:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:05:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 23 16:05:47 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 23 16:05:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:47.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:05:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:05:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94001940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:05:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 23 16:05:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 23 16:05:49 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 23 16:05:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:49 np0005532761 podman[261213]: 2025-11-23 21:05:49.537544812 +0000 UTC m=+0.055927215 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 23 16:05:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:50.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:05:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:05:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 6.9 MiB/s wr, 62 op/s
Nov 23 16:05:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:51.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:05:51.863 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:05:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:05:51.863 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:05:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:05:51.864 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:05:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84002c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.9 MiB/s wr, 53 op/s
Nov 23 16:05:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:05:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:54.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840035b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.1 MiB/s wr, 52 op/s
Nov 23 16:05:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 23 16:05:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 23 16:05:55 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 23 16:05:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:56.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.1 MiB/s wr, 52 op/s
Nov 23 16:05:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:05:57.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:05:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:05:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 23 16:05:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210559 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:05:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 MiB/s wr, 31 op/s
Nov 23 16:05:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:05:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:05:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.333 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.333 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.333 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.343 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.344 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:00 np0005532761 nova_compute[257263]: 2025-11-23 21:06:00.344 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:01 np0005532761 nova_compute[257263]: 2025-11-23 21:06:01.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:01 np0005532761 nova_compute[257263]: 2025-11-23 21:06:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:01 np0005532761 nova_compute[257263]: 2025-11-23 21:06:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:01 np0005532761 nova_compute[257263]: 2025-11-23 21:06:01.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:06:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 818 B/s wr, 5 op/s
Nov 23 16:06:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:01.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.054 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.087 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.088 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.089 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.089 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.089 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:06:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:02.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:06:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266415326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.557 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.751 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.752 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.752 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.752 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.830 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.830 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:06:02 np0005532761 nova_compute[257263]: 2025-11-23 21:06:02.854 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 818 B/s wr, 5 op/s
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:06:03
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.nfs']
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:06:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:06:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:06:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2158962747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:06:03 np0005532761 nova_compute[257263]: 2025-11-23 21:06:03.321 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:06:03 np0005532761 nova_compute[257263]: 2025-11-23 21:06:03.326 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:06:03 np0005532761 nova_compute[257263]: 2025-11-23 21:06:03.338 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:06:03 np0005532761 nova_compute[257263]: 2025-11-23 21:06:03.340 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:06:03 np0005532761 nova_compute[257263]: 2025-11-23 21:06:03.341 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:06:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:06:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:04.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:04 np0005532761 nova_compute[257263]: 2025-11-23 21:06:04.340 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:04 np0005532761 nova_compute[257263]: 2025-11-23 21:06:04.340 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:06:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:06:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:05.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:06.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:06:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:07.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:06:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:07.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:06:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:07.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:06:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:07] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:07] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:08 np0005532761 podman[261323]: 2025-11-23 21:06:08.565502175 +0000 UTC m=+0.067521953 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 23 16:06:08 np0005532761 podman[261322]: 2025-11-23 21:06:08.565537167 +0000 UTC m=+0.083916192 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 23 16:06:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:09.031 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:06:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:09.032 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:06:09 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:09.033 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:06:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:06:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:10.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:10 np0005532761 podman[261494]: 2025-11-23 21:06:10.9443474 +0000 UTC m=+0.073628388 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:06:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210611 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:06:11 np0005532761 podman[261494]: 2025-11-23 21:06:11.097230442 +0000 UTC m=+0.226511430 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:06:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 16:06:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:11 np0005532761 podman[261614]: 2025-11-23 21:06:11.650508172 +0000 UTC m=+0.060987772 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:11 np0005532761 podman[261614]: 2025-11-23 21:06:11.661211936 +0000 UTC m=+0.071691486 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:12 np0005532761 podman[261705]: 2025-11-23 21:06:12.081972746 +0000 UTC m=+0.072617061 container exec eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:06:12 np0005532761 podman[261705]: 2025-11-23 21:06:12.097119978 +0000 UTC m=+0.087764303 container exec_died eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:06:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:12.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:12 np0005532761 podman[261769]: 2025-11-23 21:06:12.363199768 +0000 UTC m=+0.059659067 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:06:12 np0005532761 podman[261769]: 2025-11-23 21:06:12.375373962 +0000 UTC m=+0.071833171 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:06:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:12 np0005532761 podman[261836]: 2025-11-23 21:06:12.583907651 +0000 UTC m=+0.051297373 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, description=keepalived for Ceph, io.buildah.version=1.28.2, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 16:06:12 np0005532761 podman[261836]: 2025-11-23 21:06:12.596197088 +0000 UTC m=+0.063586790 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 16:06:12 np0005532761 podman[261901]: 2025-11-23 21:06:12.798475083 +0000 UTC m=+0.050996707 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:12 np0005532761 podman[261901]: 2025-11-23 21:06:12.819855231 +0000 UTC m=+0.072376855 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:13 np0005532761 podman[261973]: 2025-11-23 21:06:13.022544306 +0000 UTC m=+0.064095784 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:06:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:06:13 np0005532761 podman[261973]: 2025-11-23 21:06:13.248179531 +0000 UTC m=+0.289730989 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:06:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:13 np0005532761 podman[262089]: 2025-11-23 21:06:13.639869028 +0000 UTC m=+0.069628551 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:13 np0005532761 podman[262089]: 2025-11-23 21:06:13.679154872 +0000 UTC m=+0.108914395 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:06:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:06:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:06:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:06:14 np0005532761 podman[262304]: 2025-11-23 21:06:14.985232684 +0000 UTC m=+0.043723283 container create e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:06:15 np0005532761 systemd[1]: Started libpod-conmon-e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173.scope.
Nov 23 16:06:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:15.061576632 +0000 UTC m=+0.120067251 container init e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:14.968254822 +0000 UTC m=+0.026745421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:15.067751706 +0000 UTC m=+0.126242305 container start e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:15.071891516 +0000 UTC m=+0.130382145 container attach e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:15 np0005532761 kind_carson[262321]: 167 167
Nov 23 16:06:15 np0005532761 systemd[1]: libpod-e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173.scope: Deactivated successfully.
Nov 23 16:06:15 np0005532761 conmon[262321]: conmon e011c66c7c06144bb080 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173.scope/container/memory.events
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:15.075074101 +0000 UTC m=+0.133564690 container died e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:06:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-421600b3eeeee67edea56763125458c4819767a5e0f5c12654868d0a3296bc80-merged.mount: Deactivated successfully.
Nov 23 16:06:15 np0005532761 podman[262304]: 2025-11-23 21:06:15.117252792 +0000 UTC m=+0.175743391 container remove e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_carson, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 16:06:15 np0005532761 systemd[1]: libpod-conmon-e011c66c7c06144bb080ba7b3cbb666cac2d542ed838c7541dfd18d4ee593173.scope: Deactivated successfully.
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.267845032 +0000 UTC m=+0.043095475 container create e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:06:15 np0005532761 systemd[1]: Started libpod-conmon-e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc.scope.
Nov 23 16:06:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.333689621 +0000 UTC m=+0.108940084 container init e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.342438565 +0000 UTC m=+0.117689008 container start e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.346021309 +0000 UTC m=+0.121271762 container attach e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.251364555 +0000 UTC m=+0.026615008 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:06:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:15.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:06:15 np0005532761 bold_villani[262360]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:06:15 np0005532761 bold_villani[262360]: --> All data devices are unavailable
Nov 23 16:06:15 np0005532761 systemd[1]: libpod-e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc.scope: Deactivated successfully.
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.684559164 +0000 UTC m=+0.459809607 container died e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:06:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-61ff05d626d8cbe358cedfef675325aa54006c4cf252be1117fcb18c883c302a-merged.mount: Deactivated successfully.
Nov 23 16:06:15 np0005532761 podman[262343]: 2025-11-23 21:06:15.72015301 +0000 UTC m=+0.495403453 container remove e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:15 np0005532761 systemd[1]: libpod-conmon-e0aa89c5b84768dc9ae78387a1f09f5ee6e22abed613a2b47eeb657aa5c377bc.scope: Deactivated successfully.
Nov 23 16:06:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:16.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.226213106 +0000 UTC m=+0.034111067 container create 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 16:06:16 np0005532761 systemd[1]: Started libpod-conmon-0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c.scope.
Nov 23 16:06:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.299563225 +0000 UTC m=+0.107461206 container init 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.306425797 +0000 UTC m=+0.114323768 container start 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.212515862 +0000 UTC m=+0.020413843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:16 np0005532761 focused_visvesvaraya[262496]: 167 167
Nov 23 16:06:16 np0005532761 systemd[1]: libpod-0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c.scope: Deactivated successfully.
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.314244445 +0000 UTC m=+0.122142436 container attach 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.314868121 +0000 UTC m=+0.122766072 container died 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:06:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9b61ca94174f052ba51397e048e1112901af02ebe8f21af4725be4dfe39be412-merged.mount: Deactivated successfully.
Nov 23 16:06:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:16 np0005532761 podman[262480]: 2025-11-23 21:06:16.416520432 +0000 UTC m=+0.224418393 container remove 0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:06:16 np0005532761 systemd[1]: libpod-conmon-0d0d8632d6c449301542c8a98fb3f874f59b67a7e121d8cf312e8758caf1324c.scope: Deactivated successfully.
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.60538118 +0000 UTC m=+0.055659550 container create ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:06:16 np0005532761 systemd[1]: Started libpod-conmon-ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a.scope.
Nov 23 16:06:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbe1c8430c3246d92ec2ef744559ae2f4dc04faa7f44bcca7cb13c45d689b90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.587816453 +0000 UTC m=+0.038094843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbe1c8430c3246d92ec2ef744559ae2f4dc04faa7f44bcca7cb13c45d689b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbe1c8430c3246d92ec2ef744559ae2f4dc04faa7f44bcca7cb13c45d689b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbe1c8430c3246d92ec2ef744559ae2f4dc04faa7f44bcca7cb13c45d689b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.701653268 +0000 UTC m=+0.151931658 container init ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.712091095 +0000 UTC m=+0.162369465 container start ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.716227235 +0000 UTC m=+0.166505625 container attach ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]: {
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:    "1": [
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:        {
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "devices": [
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "/dev/loop3"
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            ],
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "lv_name": "ceph_lv0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "lv_size": "21470642176",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "name": "ceph_lv0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "tags": {
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.cluster_name": "ceph",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.crush_device_class": "",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.encrypted": "0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.osd_id": "1",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.type": "block",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.vdo": "0",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:                "ceph.with_tpm": "0"
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            },
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "type": "block",
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:            "vg_name": "ceph_vg0"
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:        }
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]:    ]
Nov 23 16:06:16 np0005532761 nifty_ritchie[262537]: }
Nov 23 16:06:16 np0005532761 systemd[1]: libpod-ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a.scope: Deactivated successfully.
Nov 23 16:06:16 np0005532761 podman[262521]: 2025-11-23 21:06:16.996550343 +0000 UTC m=+0.446828713 container died ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:06:17 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5fbe1c8430c3246d92ec2ef744559ae2f4dc04faa7f44bcca7cb13c45d689b90-merged.mount: Deactivated successfully.
Nov 23 16:06:17 np0005532761 podman[262521]: 2025-11-23 21:06:17.036556097 +0000 UTC m=+0.486834467 container remove ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ritchie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:17 np0005532761 systemd[1]: libpod-conmon-ac72a85fb01a882a7cf5167e961120a6bf7f06f321937ce9c0d35d73f0ab0c1a.scope: Deactivated successfully.
Nov 23 16:06:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:06:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:17.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:06:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:17.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:17 np0005532761 podman[262651]: 2025-11-23 21:06:17.629533631 +0000 UTC m=+0.073006180 container create 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:06:17 np0005532761 systemd[1]: Started libpod-conmon-7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd.scope.
Nov 23 16:06:17 np0005532761 podman[262651]: 2025-11-23 21:06:17.596853463 +0000 UTC m=+0.040326082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:17 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:17 np0005532761 podman[262651]: 2025-11-23 21:06:17.721283809 +0000 UTC m=+0.164756338 container init 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:06:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:17] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:17] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:17 np0005532761 podman[262651]: 2025-11-23 21:06:17.730221017 +0000 UTC m=+0.173693526 container start 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 16:06:17 np0005532761 podman[262651]: 2025-11-23 21:06:17.734190752 +0000 UTC m=+0.177663251 container attach 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 16:06:17 np0005532761 gallant_rhodes[262667]: 167 167
Nov 23 16:06:17 np0005532761 systemd[1]: libpod-7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd.scope: Deactivated successfully.
Nov 23 16:06:17 np0005532761 podman[262672]: 2025-11-23 21:06:17.774862093 +0000 UTC m=+0.024479922 container died 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:17 np0005532761 systemd[1]: var-lib-containers-storage-overlay-90677503b8f49a1d59f521a7370b16fcd7c5b1f9ff78c4b14a7713e003628edf-merged.mount: Deactivated successfully.
Nov 23 16:06:17 np0005532761 podman[262672]: 2025-11-23 21:06:17.807903411 +0000 UTC m=+0.057521220 container remove 7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Nov 23 16:06:17 np0005532761 systemd[1]: libpod-conmon-7800b04313c7d84baa4cc5ea5fa5de06d8e57c3bf80506e9f15395e9fea8b5cd.scope: Deactivated successfully.
Nov 23 16:06:18 np0005532761 podman[262694]: 2025-11-23 21:06:18.012917668 +0000 UTC m=+0.059703328 container create 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:06:18 np0005532761 systemd[1]: Started libpod-conmon-860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7.scope.
Nov 23 16:06:18 np0005532761 podman[262694]: 2025-11-23 21:06:17.980458555 +0000 UTC m=+0.027244315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:18 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:06:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c0fdcc435aec776e3b906c364a7a512e3a1f8edc2cf20a6d796662c080743e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c0fdcc435aec776e3b906c364a7a512e3a1f8edc2cf20a6d796662c080743e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c0fdcc435aec776e3b906c364a7a512e3a1f8edc2cf20a6d796662c080743e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:18 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c0fdcc435aec776e3b906c364a7a512e3a1f8edc2cf20a6d796662c080743e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:18 np0005532761 podman[262694]: 2025-11-23 21:06:18.114365573 +0000 UTC m=+0.161151263 container init 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:06:18 np0005532761 podman[262694]: 2025-11-23 21:06:18.120488886 +0000 UTC m=+0.167274546 container start 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 16:06:18 np0005532761 podman[262694]: 2025-11-23 21:06:18.124594045 +0000 UTC m=+0.171380025 container attach 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 16:06:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:18.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:06:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:18 np0005532761 lvm[262785]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:06:18 np0005532761 lvm[262785]: VG ceph_vg0 finished
Nov 23 16:06:18 np0005532761 jovial_chatelet[262710]: {}
Nov 23 16:06:18 np0005532761 systemd[1]: libpod-860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7.scope: Deactivated successfully.
Nov 23 16:06:18 np0005532761 systemd[1]: libpod-860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7.scope: Consumed 1.084s CPU time.
Nov 23 16:06:18 np0005532761 podman[262789]: 2025-11-23 21:06:18.884447543 +0000 UTC m=+0.024626624 container died 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:06:18 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c4c0fdcc435aec776e3b906c364a7a512e3a1f8edc2cf20a6d796662c080743e-merged.mount: Deactivated successfully.
Nov 23 16:06:18 np0005532761 podman[262789]: 2025-11-23 21:06:18.925927756 +0000 UTC m=+0.066106837 container remove 860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:06:18 np0005532761 systemd[1]: libpod-conmon-860cb56849b4adaccd9eaae36f68a9a83482559668dbc493413a19649d59f2d7.scope: Deactivated successfully.
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:06:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 16:06:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:06:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:19.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:19 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:20.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:20 np0005532761 podman[262831]: 2025-11-23 21:06:20.556727136 +0000 UTC m=+0.074732397 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:06:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Nov 23 16:06:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:21.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:06:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:06:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Nov 23 16:06:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:23.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:06:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:06:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:25.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:27.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:06:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:06:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:27.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:27] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:27] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 16:06:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:29.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210631 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:06:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 16:06:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:31.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 16:06:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:06:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:06:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:06:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:33.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 767 B/s wr, 9 op/s
Nov 23 16:06:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.489881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995489973, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 255, "total_data_size": 1122860, "memory_usage": 1137248, "flush_reason": "Manual Compaction"}
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995519990, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1093658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23174, "largest_seqno": 23924, "table_properties": {"data_size": 1089759, "index_size": 1679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8609, "raw_average_key_size": 18, "raw_value_size": 1081741, "raw_average_value_size": 2351, "num_data_blocks": 74, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931946, "oldest_key_time": 1763931946, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 30175 microseconds, and 7525 cpu microseconds.
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520067) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1093658 bytes OK
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520102) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541392) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541459) EVENT_LOG_v1 {"time_micros": 1763931995541445, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541500) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1119028, prev total WAL file size 1119028, number of live WAL files 2.
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.542517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1068KB)], [50(12MB)]
Nov 23 16:06:35 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995542560, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 14130356, "oldest_snapshot_seqno": -1}
Nov 23 16:06:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:06:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:06:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5421 keys, 13967878 bytes, temperature: kUnknown
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931996263733, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13967878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13931263, "index_size": 21977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138383, "raw_average_key_size": 25, "raw_value_size": 13832867, "raw_average_value_size": 2551, "num_data_blocks": 897, "num_entries": 5421, "num_filter_entries": 5421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.264335) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13967878 bytes
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.326001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.6 rd, 19.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(25.7) write-amplify(12.8) OK, records in: 5948, records dropped: 527 output_compression: NoCompression
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.326055) EVENT_LOG_v1 {"time_micros": 1763931996326032, "job": 26, "event": "compaction_finished", "compaction_time_micros": 721505, "compaction_time_cpu_micros": 35437, "output_level": 6, "num_output_files": 1, "total_output_size": 13967878, "num_input_records": 5948, "num_output_records": 5421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931996326728, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931996331331, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:35.542416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.331403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.331409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.331412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.331415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:06:36.331417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 23 16:06:36 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 23 16:06:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:37.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:06:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 102 B/s wr, 7 op/s
Nov 23 16:06:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:37.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:37] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:37] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 102 B/s wr, 7 op/s
Nov 23 16:06:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 23 16:06:39 np0005532761 podman[262899]: 2025-11-23 21:06:39.54666512 +0000 UTC m=+0.049268221 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:06:39 np0005532761 podman[262898]: 2025-11-23 21:06:39.583751755 +0000 UTC m=+0.088480721 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:06:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 23 16:06:39 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 23 16:06:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 2 active+clean+snaptrim, 335 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Nov 23 16:06:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:41.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 2 active+clean+snaptrim, 335 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 255 B/s wr, 1 op/s
Nov 23 16:06:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:43.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:44.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 50 op/s
Nov 23 16:06:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:45.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:47.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:06:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:47.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:06:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:47.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:06:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 23 16:06:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:47.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:47] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:47] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 23 16:06:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:06:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:06:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:48.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 23 16:06:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:49.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 23 16:06:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 23 16:06:50 np0005532761 ceph-mon[74569]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 23 16:06:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 23 16:06:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:51 np0005532761 podman[262979]: 2025-11-23 21:06:51.572744256 +0000 UTC m=+0.089041347 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 16:06:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:51.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:51.863 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:51.864 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:06:51.864 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:52.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 23 16:06:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:54.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 16:06:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:56.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:06:57.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:06:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 16:06:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:57.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:06:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:06:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.050 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.051 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.051 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:06:58 np0005532761 nova_compute[257263]: 2025-11-23 21:06:58.062 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:58.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 16:06:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:06:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:06:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:59.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:00 np0005532761 nova_compute[257263]: 2025-11-23 21:07:00.071 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532761 nova_compute[257263]: 2025-11-23 21:07:00.071 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:00.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:01 np0005532761 nova_compute[257263]: 2025-11-23 21:07:01.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:01 np0005532761 nova_compute[257263]: 2025-11-23 21:07:01.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:07:01 np0005532761 nova_compute[257263]: 2025-11-23 21:07:01.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:07:01 np0005532761 nova_compute[257263]: 2025-11-23 21:07:01.049 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:07:01 np0005532761 nova_compute[257263]: 2025-11-23 21:07:01.049 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 118 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 23 16:07:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:02 np0005532761 nova_compute[257263]: 2025-11-23 21:07:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:02 np0005532761 nova_compute[257263]: 2025-11-23 21:07:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:02 np0005532761 nova_compute[257263]: 2025-11-23 21:07:02.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:07:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:02.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:07:03
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'images', '.nfs', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes']
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 118 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 23 16:07:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:07:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:07:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:07:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:07:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:03.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.054 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.055 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.055 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.055 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.056 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984064645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.538 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.686 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.687 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4942MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.687 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.687 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.783 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.784 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.839 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing inventories for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.878 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating ProviderTree inventory for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.878 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating inventory in ProviderTree for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.892 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing aggregate associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.920 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing trait associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:07:04 np0005532761 nova_compute[257263]: 2025-11-23 21:07:04.942 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:07:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414325586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:05 np0005532761 nova_compute[257263]: 2025-11-23 21:07:05.353 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:07:05 np0005532761 nova_compute[257263]: 2025-11-23 21:07:05.358 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:07:05 np0005532761 nova_compute[257263]: 2025-11-23 21:07:05.369 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:07:05 np0005532761 nova_compute[257263]: 2025-11-23 21:07:05.370 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:07:05 np0005532761 nova_compute[257263]: 2025-11-23 21:07:05.370 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:07.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:07:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:07.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:07:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:07:07 np0005532761 nova_compute[257263]: 2025-11-23 21:07:07.372 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:07] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 23 16:07:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:07] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 23 16:07:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:08.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:07:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:09.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:10 np0005532761 podman[263088]: 2025-11-23 21:07:10.547683289 +0000 UTC m=+0.062028468 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:07:10 np0005532761 podman[263087]: 2025-11-23 21:07:10.569210371 +0000 UTC m=+0.089474838 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 16:07:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:07:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:11.218 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:07:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:11.219 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:07:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:12.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880041a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 16:07:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:14.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94001330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 18 KiB/s wr, 1 op/s
Nov 23 16:07:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:16 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:16.221 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:07:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94001330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:17.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:07:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 16:07:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:17.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:17] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 23 16:07:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:17] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 23 16:07:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:07:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:07:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:18.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 16:07:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:19.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:20.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:07:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.718670317 +0000 UTC m=+0.040059604 container create d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:07:20 np0005532761 systemd[1]: Started libpod-conmon-d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada.scope.
Nov 23 16:07:20 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.699603361 +0000 UTC m=+0.020992678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.798823087 +0000 UTC m=+0.120212394 container init d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.805884564 +0000 UTC m=+0.127273851 container start d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.809207073 +0000 UTC m=+0.130596380 container attach d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:07:20 np0005532761 cranky_chandrasekhar[263332]: 167 167
Nov 23 16:07:20 np0005532761 systemd[1]: libpod-d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada.scope: Deactivated successfully.
Nov 23 16:07:20 np0005532761 conmon[263332]: conmon d50bad33743effe96a9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada.scope/container/memory.events
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.812794798 +0000 UTC m=+0.134184085 container died d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:07:20 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4715e3468124e13fc740d5aef3b1de332dd7ee11a40d9b6ec940b9d99052557d-merged.mount: Deactivated successfully.
Nov 23 16:07:20 np0005532761 podman[263315]: 2025-11-23 21:07:20.855441982 +0000 UTC m=+0.176831269 container remove d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_chandrasekhar, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:07:20 np0005532761 systemd[1]: libpod-conmon-d50bad33743effe96a9d8ff2a883d7451ea72d90164990e02c095d704f972ada.scope: Deactivated successfully.
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.014251571 +0000 UTC m=+0.043715423 container create 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:07:21 np0005532761 systemd[1]: Started libpod-conmon-963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0.scope.
Nov 23 16:07:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:21 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:20.996933121 +0000 UTC m=+0.026397023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.101209361 +0000 UTC m=+0.130673213 container init 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.108493045 +0000 UTC m=+0.137956897 container start 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.113713043 +0000 UTC m=+0.143176915 container attach 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:07:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 16:07:21 np0005532761 jovial_liskov[263373]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:07:21 np0005532761 jovial_liskov[263373]: --> All data devices are unavailable
Nov 23 16:07:21 np0005532761 systemd[1]: libpod-963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0.scope: Deactivated successfully.
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.454412786 +0000 UTC m=+0.483876638 container died 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:07:21 np0005532761 systemd[1]: var-lib-containers-storage-overlay-51a3bd522785f8f5b07664e9bb22ff925975476ef535b0f04554d89de1313e4f-merged.mount: Deactivated successfully.
Nov 23 16:07:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:21 np0005532761 podman[263356]: 2025-11-23 21:07:21.495327043 +0000 UTC m=+0.524790915 container remove 963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:07:21 np0005532761 systemd[1]: libpod-conmon-963eed7eb84e124af2adf5ef85713097955d85ca00af60069213dd68f676cae0.scope: Deactivated successfully.
Nov 23 16:07:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:21.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:21 np0005532761 podman[263426]: 2025-11-23 21:07:21.750907883 +0000 UTC m=+0.101128228 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.072931369 +0000 UTC m=+0.038234386 container create c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:07:22 np0005532761 systemd[1]: Started libpod-conmon-c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743.scope.
Nov 23 16:07:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.144651766 +0000 UTC m=+0.109954803 container init c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.151430745 +0000 UTC m=+0.116733762 container start c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.058141117 +0000 UTC m=+0.023444154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.155157034 +0000 UTC m=+0.120460071 container attach c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:07:22 np0005532761 inspiring_mirzakhani[263531]: 167 167
Nov 23 16:07:22 np0005532761 systemd[1]: libpod-c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743.scope: Deactivated successfully.
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.157107847 +0000 UTC m=+0.122410864 container died c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:07:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4628676f47ed38513727d2403137b65178b3935dff9e96bef837892bb79ecdaa-merged.mount: Deactivated successfully.
Nov 23 16:07:22 np0005532761 podman[263515]: 2025-11-23 21:07:22.190556864 +0000 UTC m=+0.155859881 container remove c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:07:22 np0005532761 systemd[1]: libpod-conmon-c514ae5f0fa405f27b614d47ca744fdf6a9112a870cffa1e3a251f7156269743.scope: Deactivated successfully.
Nov 23 16:07:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.362925625 +0000 UTC m=+0.044754300 container create bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:07:22 np0005532761 systemd[1]: Started libpod-conmon-bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f.scope.
Nov 23 16:07:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a2d9a5829d82316304b96073ea90048e19b7ca5c67e57907b7df5d35a998a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a2d9a5829d82316304b96073ea90048e19b7ca5c67e57907b7df5d35a998a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a2d9a5829d82316304b96073ea90048e19b7ca5c67e57907b7df5d35a998a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a2d9a5829d82316304b96073ea90048e19b7ca5c67e57907b7df5d35a998a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.344891846 +0000 UTC m=+0.026720531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.452850744 +0000 UTC m=+0.134679449 container init bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.467477492 +0000 UTC m=+0.149306167 container start bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.472770903 +0000 UTC m=+0.154599588 container attach bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:22 np0005532761 youthful_curran[263572]: {
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:    "1": [
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:        {
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "devices": [
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "/dev/loop3"
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            ],
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "lv_name": "ceph_lv0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "lv_size": "21470642176",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "name": "ceph_lv0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "tags": {
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.cluster_name": "ceph",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.crush_device_class": "",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.encrypted": "0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.osd_id": "1",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.type": "block",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.vdo": "0",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:                "ceph.with_tpm": "0"
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            },
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "type": "block",
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:            "vg_name": "ceph_vg0"
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:        }
Nov 23 16:07:22 np0005532761 youthful_curran[263572]:    ]
Nov 23 16:07:22 np0005532761 youthful_curran[263572]: }
Nov 23 16:07:22 np0005532761 systemd[1]: libpod-bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f.scope: Deactivated successfully.
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.768175782 +0000 UTC m=+0.450004447 container died bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:07:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d3a2d9a5829d82316304b96073ea90048e19b7ca5c67e57907b7df5d35a998a0-merged.mount: Deactivated successfully.
Nov 23 16:07:22 np0005532761 podman[263556]: 2025-11-23 21:07:22.81214108 +0000 UTC m=+0.493969745 container remove bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 16:07:22 np0005532761 systemd[1]: libpod-conmon-bb82926b8e2bfcf0b3438c001bdab52638734c9aaa8e6440e04c12a70b025f8f.scope: Deactivated successfully.
Nov 23 16:07:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.350245507 +0000 UTC m=+0.035015961 container create 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 16:07:23 np0005532761 systemd[1]: Started libpod-conmon-852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4.scope.
Nov 23 16:07:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.418269684 +0000 UTC m=+0.103040158 container init 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.424457939 +0000 UTC m=+0.109228393 container start 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.42752002 +0000 UTC m=+0.112290494 container attach 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:07:23 np0005532761 vigilant_almeida[263701]: 167 167
Nov 23 16:07:23 np0005532761 systemd[1]: libpod-852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4.scope: Deactivated successfully.
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.335147586 +0000 UTC m=+0.019918060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.431399724 +0000 UTC m=+0.116170178 container died 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:07:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d70c432337d065c9b6debb9ad10e09985aef3bd91b151f5e0ecaa5bd3ea46a39-merged.mount: Deactivated successfully.
Nov 23 16:07:23 np0005532761 podman[263685]: 2025-11-23 21:07:23.463727353 +0000 UTC m=+0.148497807 container remove 852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:07:23 np0005532761 systemd[1]: libpod-conmon-852afc62fcbbee901ed19db6a600dc0c8b5b6e94afd5cad47d2c4d36f35a25c4.scope: Deactivated successfully.
Nov 23 16:07:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:23 np0005532761 podman[263726]: 2025-11-23 21:07:23.614033566 +0000 UTC m=+0.036020948 container create 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 16:07:23 np0005532761 systemd[1]: Started libpod-conmon-50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125.scope.
Nov 23 16:07:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:07:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2672a58fb6318d92f65e0871d014fe63e6a1883e226233af89fb9a1bc2139b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2672a58fb6318d92f65e0871d014fe63e6a1883e226233af89fb9a1bc2139b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2672a58fb6318d92f65e0871d014fe63e6a1883e226233af89fb9a1bc2139b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2672a58fb6318d92f65e0871d014fe63e6a1883e226233af89fb9a1bc2139b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:07:23 np0005532761 podman[263726]: 2025-11-23 21:07:23.693069576 +0000 UTC m=+0.115056988 container init 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:23 np0005532761 podman[263726]: 2025-11-23 21:07:23.599433688 +0000 UTC m=+0.021421090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:07:23 np0005532761 podman[263726]: 2025-11-23 21:07:23.707391027 +0000 UTC m=+0.129378449 container start 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:07:23 np0005532761 podman[263726]: 2025-11-23 21:07:23.711534697 +0000 UTC m=+0.133522099 container attach 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:07:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:07:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:23.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:07:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:24 np0005532761 lvm[263842]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:07:24 np0005532761 lvm[263842]: VG ceph_vg0 finished
Nov 23 16:07:24 np0005532761 unruffled_herschel[263743]: {}
Nov 23 16:07:24 np0005532761 systemd[1]: libpod-50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125.scope: Deactivated successfully.
Nov 23 16:07:24 np0005532761 podman[263726]: 2025-11-23 21:07:24.377218093 +0000 UTC m=+0.799205465 container died 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:07:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f2672a58fb6318d92f65e0871d014fe63e6a1883e226233af89fb9a1bc2139b3-merged.mount: Deactivated successfully.
Nov 23 16:07:24 np0005532761 podman[263726]: 2025-11-23 21:07:24.427411617 +0000 UTC m=+0.849399009 container remove 50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:07:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:24 np0005532761 systemd[1]: libpod-conmon-50dccf55ee377ed0df68e19849484c814c67a3ba946a283a815f7697e0369125.scope: Deactivated successfully.
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:07:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:24 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 16:07:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:07:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:07:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:27.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:07:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 16:07:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:27] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 23 16:07:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:27] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 23 16:07:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 16:07:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:07:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:07:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 23 16:07:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 23 16:07:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:07:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:07:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:07:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:07:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:07:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 23 16:07:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:36.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:37.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:07:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 23 16:07:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:07:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:07:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:37.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:38.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.658198) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058658259, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 821, "num_deletes": 251, "total_data_size": 1239716, "memory_usage": 1259312, "flush_reason": "Manual Compaction"}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058671272, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1227461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23925, "largest_seqno": 24745, "table_properties": {"data_size": 1223306, "index_size": 1871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9656, "raw_average_key_size": 19, "raw_value_size": 1214752, "raw_average_value_size": 2504, "num_data_blocks": 83, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931996, "oldest_key_time": 1763931996, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13241 microseconds, and 6602 cpu microseconds.
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.671451) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1227461 bytes OK
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.671538) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.673439) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.673465) EVENT_LOG_v1 {"time_micros": 1763932058673458, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.673493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1235662, prev total WAL file size 1235662, number of live WAL files 2.
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.675925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1198KB)], [53(13MB)]
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058675982, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15195339, "oldest_snapshot_seqno": -1}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5386 keys, 13042479 bytes, temperature: kUnknown
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058780378, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13042479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13006836, "index_size": 21069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138419, "raw_average_key_size": 25, "raw_value_size": 12909720, "raw_average_value_size": 2396, "num_data_blocks": 855, "num_entries": 5386, "num_filter_entries": 5386, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.780892) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13042479 bytes
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.782133) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.4 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 5906, records dropped: 520 output_compression: NoCompression
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.782153) EVENT_LOG_v1 {"time_micros": 1763932058782145, "job": 28, "event": "compaction_finished", "compaction_time_micros": 104494, "compaction_time_cpu_micros": 42555, "output_level": 6, "num_output_files": 1, "total_output_size": 13042479, "num_input_records": 5906, "num_output_records": 5386, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058782787, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058786596, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.786635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.786640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.786642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.786644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:07:38.786646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 23 16:07:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:39.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:07:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:07:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 177 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 84 op/s
Nov 23 16:07:41 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:07:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:41 np0005532761 podman[263905]: 2025-11-23 21:07:41.572834483 +0000 UTC m=+0.085769259 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 23 16:07:41 np0005532761 podman[263904]: 2025-11-23 21:07:41.581776052 +0000 UTC m=+0.101136788 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:41.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 177 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 1020 KiB/s wr, 20 op/s
Nov 23 16:07:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:43.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 653 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 23 16:07:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:07:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:07:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:47.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:07:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:07:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:07:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:07:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:07:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:07:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:48.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 23 16:07:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 163 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Nov 23 16:07:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:51.864 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:07:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:51.865 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:07:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:07:51.865 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:07:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:52 np0005532761 podman[263986]: 2025-11-23 21:07:52.57448172 +0000 UTC m=+0.088746209 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 16:07:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 163 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 MiB/s wr, 63 op/s
Nov 23 16:07:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:53.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 1.2 MiB/s wr, 85 op/s
Nov 23 16:07:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:55.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:07:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:07:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:57.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:07:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:57.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:07:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:07:57.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:07:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 29 op/s
Nov 23 16:07:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:07:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:07:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:07:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:57.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 118 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 23 16:07:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:07:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:07:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:00 np0005532761 nova_compute[257263]: 2025-11-23 21:08:00.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:00.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:01 np0005532761 nova_compute[257263]: 2025-11-23 21:08:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Nov 23 16:08:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.052 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.052 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:02 np0005532761 nova_compute[257263]: 2025-11-23 21:08:02.052 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:02.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210802 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:08:03
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'volumes', 'default.rgw.meta', 'images', '.rgw.root']
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Nov 23 16:08:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:08:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:08:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:08:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:04 np0005532761 nova_compute[257263]: 2025-11-23 21:08:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:04 np0005532761 nova_compute[257263]: 2025-11-23 21:08:04.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:08:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:05 np0005532761 nova_compute[257263]: 2025-11-23 21:08:05.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Nov 23 16:08:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:05.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.064 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.064 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.065 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.066 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:08:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:06.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:08:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512843561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.521 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:08:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.659 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.660 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.661 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.661 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.754 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.754 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:08:06 np0005532761 nova_compute[257263]: 2025-11-23 21:08:06.782 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:08:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:07.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:07.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:08:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 23 16:08:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:08:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123520336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:08:07 np0005532761 nova_compute[257263]: 2025-11-23 21:08:07.235 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:08:07 np0005532761 nova_compute[257263]: 2025-11-23 21:08:07.243 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:08:07 np0005532761 nova_compute[257263]: 2025-11-23 21:08:07.256 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:08:07 np0005532761 nova_compute[257263]: 2025-11-23 21:08:07.258 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:08:07 np0005532761 nova_compute[257263]: 2025-11-23 21:08:07.258 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:08:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/210807 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:08:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:07] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Nov 23 16:08:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:07] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Nov 23 16:08:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:07.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:08.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c003e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:08:09 np0005532761 nova_compute[257263]: 2025-11-23 21:08:09.259 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:08:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 23 16:08:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:11.350 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:08:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:11.350 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:08:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:08:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:11.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:12 np0005532761 podman[264100]: 2025-11-23 21:08:12.209795583 +0000 UTC m=+0.070280068 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:08:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:12 np0005532761 podman[264099]: 2025-11-23 21:08:12.274198404 +0000 UTC m=+0.139060946 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 16:08:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 16:08:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:08:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:08:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 16:08:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:08:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/74443691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:08:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:15.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:16 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:16.353 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:08:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:17.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:17.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:17.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 16:08:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:17] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Nov 23 16:08:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:17] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Nov 23 16:08:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:08:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:08:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:18.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 53 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 823 KiB/s wr, 26 op/s
Nov 23 16:08:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:20.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:08:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:21.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:08:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:23 np0005532761 podman[264159]: 2025-11-23 21:08:23.556364006 +0000 UTC m=+0.076886394 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:08:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:08:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:08:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:08:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.261697594 +0000 UTC m=+0.050639716 container create d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:08:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:08:26 np0005532761 systemd[1]: Started libpod-conmon-d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3.scope.
Nov 23 16:08:26 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.242205666 +0000 UTC m=+0.031147808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.336124002 +0000 UTC m=+0.125066164 container init d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.341612998 +0000 UTC m=+0.130555120 container start d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.344367941 +0000 UTC m=+0.133310103 container attach d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:08:26 np0005532761 hardcore_panini[264401]: 167 167
Nov 23 16:08:26 np0005532761 systemd[1]: libpod-d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3.scope: Deactivated successfully.
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.34845987 +0000 UTC m=+0.137402002 container died d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e46a8b4d76b3ce6da5c1b73136abb9c12417ca159d59d9decb55b59374314a5c-merged.mount: Deactivated successfully.
Nov 23 16:08:26 np0005532761 podman[264385]: 2025-11-23 21:08:26.387473966 +0000 UTC m=+0.176416088 container remove d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_panini, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:26 np0005532761 systemd[1]: libpod-conmon-d106f6af9143c96b96d7ce8013f73b5f1e54a05a4d306933237ba32965af67b3.scope: Deactivated successfully.
Nov 23 16:08:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.551652008 +0000 UTC m=+0.040340093 container create b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:08:26 np0005532761 systemd[1]: Started libpod-conmon-b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9.scope.
Nov 23 16:08:26 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:26 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:08:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.614623861 +0000 UTC m=+0.103311976 container init b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.622722647 +0000 UTC m=+0.111410742 container start b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.629580689 +0000 UTC m=+0.118268794 container attach b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.534423131 +0000 UTC m=+0.023111246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:26 np0005532761 keen_meninsky[264440]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:08:26 np0005532761 keen_meninsky[264440]: --> All data devices are unavailable
Nov 23 16:08:26 np0005532761 systemd[1]: libpod-b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9.scope: Deactivated successfully.
Nov 23 16:08:26 np0005532761 podman[264423]: 2025-11-23 21:08:26.974466432 +0000 UTC m=+0.463154527 container died b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:08:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-701b5670d41b104dc0749b0719b40d13bbc83a86cd5ec89ccde3ab646b1a1e10-merged.mount: Deactivated successfully.
Nov 23 16:08:27 np0005532761 podman[264423]: 2025-11-23 21:08:27.014411133 +0000 UTC m=+0.503099228 container remove b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_meninsky, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:08:27 np0005532761 systemd[1]: libpod-conmon-b46d12ba712de0e25830f647b9159911a89d143af4ce0a57e115d243db1945a9.scope: Deactivated successfully.
Nov 23 16:08:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:27.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:27.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 16:08:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.550197639 +0000 UTC m=+0.039879650 container create 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:08:27 np0005532761 systemd[1]: Started libpod-conmon-24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9.scope.
Nov 23 16:08:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.533521546 +0000 UTC m=+0.023203577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.635637409 +0000 UTC m=+0.125319440 container init 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.642025199 +0000 UTC m=+0.131707210 container start 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.646633441 +0000 UTC m=+0.136315472 container attach 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:08:27 np0005532761 laughing_cerf[264574]: 167 167
Nov 23 16:08:27 np0005532761 systemd[1]: libpod-24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9.scope: Deactivated successfully.
Nov 23 16:08:27 np0005532761 conmon[264574]: conmon 24c3a5c1c7637accfbda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9.scope/container/memory.events
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.648549402 +0000 UTC m=+0.138231443 container died 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:08:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-894e6cf0c85d387716ac6582cdcbe886406366564c7b01ffe63b803ae462fa8c-merged.mount: Deactivated successfully.
Nov 23 16:08:27 np0005532761 podman[264558]: 2025-11-23 21:08:27.686879371 +0000 UTC m=+0.176561382 container remove 24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cerf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:08:27 np0005532761 systemd[1]: libpod-conmon-24c3a5c1c7637accfbda562e090e3f6763ead42a43248e01b1367015f2d477d9.scope: Deactivated successfully.
Nov 23 16:08:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:27] "GET /metrics HTTP/1.1" 200 48439 "" "Prometheus/2.51.0"
Nov 23 16:08:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:27] "GET /metrics HTTP/1.1" 200 48439 "" "Prometheus/2.51.0"
Nov 23 16:08:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:27 np0005532761 podman[264599]: 2025-11-23 21:08:27.880475004 +0000 UTC m=+0.052760253 container create 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:08:27 np0005532761 systemd[1]: Started libpod-conmon-8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2.scope.
Nov 23 16:08:27 np0005532761 podman[264599]: 2025-11-23 21:08:27.851987468 +0000 UTC m=+0.024272767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57221c427b167ff4c0cb63e1ae54ccf10f818e83ab5cf9ad3f38946643aa4879/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57221c427b167ff4c0cb63e1ae54ccf10f818e83ab5cf9ad3f38946643aa4879/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57221c427b167ff4c0cb63e1ae54ccf10f818e83ab5cf9ad3f38946643aa4879/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57221c427b167ff4c0cb63e1ae54ccf10f818e83ab5cf9ad3f38946643aa4879/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:27 np0005532761 podman[264599]: 2025-11-23 21:08:27.975573901 +0000 UTC m=+0.147859180 container init 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 16:08:27 np0005532761 podman[264599]: 2025-11-23 21:08:27.983915573 +0000 UTC m=+0.156200822 container start 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Nov 23 16:08:27 np0005532761 podman[264599]: 2025-11-23 21:08:27.987085577 +0000 UTC m=+0.159370816 container attach 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]: {
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:    "1": [
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:        {
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "devices": [
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "/dev/loop3"
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            ],
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "lv_name": "ceph_lv0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "lv_size": "21470642176",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "name": "ceph_lv0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "tags": {
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.cluster_name": "ceph",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.crush_device_class": "",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.encrypted": "0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.osd_id": "1",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.type": "block",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.vdo": "0",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:                "ceph.with_tpm": "0"
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            },
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "type": "block",
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:            "vg_name": "ceph_vg0"
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:        }
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]:    ]
Nov 23 16:08:28 np0005532761 gifted_chandrasekhar[264616]: }
Nov 23 16:08:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:28 np0005532761 systemd[1]: libpod-8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2.scope: Deactivated successfully.
Nov 23 16:08:28 np0005532761 podman[264599]: 2025-11-23 21:08:28.293100098 +0000 UTC m=+0.465385397 container died 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:08:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-57221c427b167ff4c0cb63e1ae54ccf10f818e83ab5cf9ad3f38946643aa4879-merged.mount: Deactivated successfully.
Nov 23 16:08:28 np0005532761 podman[264599]: 2025-11-23 21:08:28.340911938 +0000 UTC m=+0.513197177 container remove 8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:08:28 np0005532761 systemd[1]: libpod-conmon-8da582bbb047b024ce44bc5fe3a67c9b7b0fbf3f709c25628274a6ff6a4578c2.scope: Deactivated successfully.
Nov 23 16:08:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:28 np0005532761 podman[264730]: 2025-11-23 21:08:28.921758751 +0000 UTC m=+0.053563445 container create 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:28 np0005532761 systemd[1]: Started libpod-conmon-3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c.scope.
Nov 23 16:08:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:28 np0005532761 podman[264730]: 2025-11-23 21:08:28.897361612 +0000 UTC m=+0.029166356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:28 np0005532761 podman[264730]: 2025-11-23 21:08:28.994262677 +0000 UTC m=+0.126067351 container init 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 16:08:29 np0005532761 podman[264730]: 2025-11-23 21:08:29.000908724 +0000 UTC m=+0.132713408 container start 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:08:29 np0005532761 podman[264730]: 2025-11-23 21:08:29.004155371 +0000 UTC m=+0.135960065 container attach 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 16:08:29 np0005532761 awesome_beaver[264746]: 167 167
Nov 23 16:08:29 np0005532761 systemd[1]: libpod-3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c.scope: Deactivated successfully.
Nov 23 16:08:29 np0005532761 podman[264730]: 2025-11-23 21:08:29.007174271 +0000 UTC m=+0.138978955 container died 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 16:08:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-20dae071983262e37746e3f541aacdd1e2d76f7fa00a922769b4af6374f7d6fe-merged.mount: Deactivated successfully.
Nov 23 16:08:29 np0005532761 podman[264730]: 2025-11-23 21:08:29.044614215 +0000 UTC m=+0.176418889 container remove 3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:08:29 np0005532761 systemd[1]: libpod-conmon-3b522fa5e1df9f8e0efe1763e5ef1d8e41d4498df9fb9ca31664ff8c025b3b6c.scope: Deactivated successfully.
Nov 23 16:08:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 16:08:29 np0005532761 podman[264771]: 2025-11-23 21:08:29.240726255 +0000 UTC m=+0.059842050 container create 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:08:29 np0005532761 systemd[1]: Started libpod-conmon-2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b.scope.
Nov 23 16:08:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:08:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675ffb3a2e44ef664aa60f708354ca437a963412ed1dcbb4aa26ad4273bee636/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675ffb3a2e44ef664aa60f708354ca437a963412ed1dcbb4aa26ad4273bee636/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675ffb3a2e44ef664aa60f708354ca437a963412ed1dcbb4aa26ad4273bee636/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675ffb3a2e44ef664aa60f708354ca437a963412ed1dcbb4aa26ad4273bee636/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:29 np0005532761 podman[264771]: 2025-11-23 21:08:29.300077343 +0000 UTC m=+0.119193168 container init 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:29 np0005532761 podman[264771]: 2025-11-23 21:08:29.312799701 +0000 UTC m=+0.131915506 container start 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:08:29 np0005532761 podman[264771]: 2025-11-23 21:08:29.317077874 +0000 UTC m=+0.136193679 container attach 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 16:08:29 np0005532761 podman[264771]: 2025-11-23 21:08:29.223763085 +0000 UTC m=+0.042878910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:08:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:29 np0005532761 lvm[264863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:08:29 np0005532761 lvm[264863]: VG ceph_vg0 finished
Nov 23 16:08:29 np0005532761 boring_elbakyan[264788]: {}
Nov 23 16:08:30 np0005532761 systemd[1]: libpod-2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b.scope: Deactivated successfully.
Nov 23 16:08:30 np0005532761 systemd[1]: libpod-2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b.scope: Consumed 1.071s CPU time.
Nov 23 16:08:30 np0005532761 podman[264771]: 2025-11-23 21:08:30.014773351 +0000 UTC m=+0.833889196 container died 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Nov 23 16:08:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-675ffb3a2e44ef664aa60f708354ca437a963412ed1dcbb4aa26ad4273bee636-merged.mount: Deactivated successfully.
Nov 23 16:08:30 np0005532761 podman[264771]: 2025-11-23 21:08:30.061273778 +0000 UTC m=+0.880389583 container remove 2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:08:30 np0005532761 systemd[1]: libpod-conmon-2e5566cf5857d8d8cde3c14ef71b6845078e267fff28217c4ec52dde2c59057b.scope: Deactivated successfully.
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1004 KiB/s wr, 77 op/s
Nov 23 16:08:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:32.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:08:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:08:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:08:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:34.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:08:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Nov 23 16:08:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:35.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:36.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:37.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:37.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:37.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Nov 23 16:08:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:37] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 23 16:08:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:37] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 23 16:08:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Nov 23 16:08:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:39.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:40.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 23 16:08:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:42.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:42 np0005532761 podman[264916]: 2025-11-23 21:08:42.551393393 +0000 UTC m=+0.062650815 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:08:42 np0005532761 podman[264915]: 2025-11-23 21:08:42.576195843 +0000 UTC m=+0.088743250 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Nov 23 16:08:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 23 16:08:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:43.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:08:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 23 16:08:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:47.140Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:08:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:47.140Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:47.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:08:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 107 KiB/s wr, 21 op/s
Nov 23 16:08:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:47] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 23 16:08:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:47] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 23 16:08:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:47.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:08:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:08:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 107 KiB/s wr, 22 op/s
Nov 23 16:08:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 104 KiB/s wr, 10 op/s
Nov 23 16:08:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:51.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:51.866 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:08:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:51.866 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:08:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:08:51.866 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:08:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:52.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 16:08:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:53.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:08:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:08:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:08:54 np0005532761 podman[264997]: 2025-11-23 21:08:54.548279735 +0000 UTC m=+0.065819110 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:08:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 17 KiB/s wr, 1 op/s
Nov 23 16:08:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:55.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:08:57.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:08:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.7 KiB/s wr, 1 op/s
Nov 23 16:08:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:08:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:08:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:08:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:08:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 5656 writes, 25K keys, 5656 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
    Cumulative WAL: 5656 writes, 5656 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1489 writes, 6680 keys, 1489 commit groups, 1.0 writes per commit group, ingest: 11.09 MB, 0.02 MB/s
    Interval WAL: 1489 writes, 1489 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     80.6      0.50              0.10        14    0.036       0      0       0.0       0.0
      L6      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2     67.9     58.2      2.91              0.44        13    0.224     67K   6905       0.0       0.0
     Sum      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2     57.9     61.5      3.41              0.54        27    0.126     67K   6905       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     46.3     46.6      1.93              0.25        12    0.160     34K   3090       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     67.9     58.2      2.91              0.44        13    0.224     67K   6905       0.0       0.0
    High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.2      0.50              0.10        13    0.038       0      0       0.0       0.0
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 1800.1 total, 600.0 interval
    Flush(GB): cumulative 0.040, interval 0.010
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 3.4 seconds
    Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.9 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x55cf3f93d350#2 capacity: 304.00 MB usage: 14.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000142 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(806,14.06 MB,4.62472%) FilterBlock(28,201.80 KB,0.0648248%) IndexBlock(28,359.64 KB,0.11553%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
Nov 23 16:08:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 23 16:08:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:08:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:08:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:59.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:00 np0005532761 nova_compute[257263]: 2025-11-23 21:09:00.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:01 np0005532761 nova_compute[257263]: 2025-11-23 21:09:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 111 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.9 KiB/s wr, 3 op/s
Nov 23 16:09:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:02 np0005532761 nova_compute[257263]: 2025-11-23 21:09:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:02.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:03 np0005532761 nova_compute[257263]: 2025-11-23 21:09:03.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:09:03
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:09:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:09:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 111 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000662551584782184 of space, bias 1.0, pg target 0.1987654754346552 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:09:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:09:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:03.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.051 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.053 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:04 np0005532761 nova_compute[257263]: 2025-11-23 21:09:04.053 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:09:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 23 16:09:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:06 np0005532761 nova_compute[257263]: 2025-11-23 21:09:06.048 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:06.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:07.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:09:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:09:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:07] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:09:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:07] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:09:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.070 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.071 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.071 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.071 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.071 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:09:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:08.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052550188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.512 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:09:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.752 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.753 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4911MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.754 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.754 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.806 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.807 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:09:08 np0005532761 nova_compute[257263]: 2025-11-23 21:09:08.822 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:09:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 16:09:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77389808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:09 np0005532761 nova_compute[257263]: 2025-11-23 21:09:09.293 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:09:09 np0005532761 nova_compute[257263]: 2025-11-23 21:09:09.299 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:09:09 np0005532761 nova_compute[257263]: 2025-11-23 21:09:09.320 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:09:09 np0005532761 nova_compute[257263]: 2025-11-23 21:09:09.322 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:09:09 np0005532761 nova_compute[257263]: 2025-11-23 21:09:09.322 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:09:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:09.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:09:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:11.834 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:09:11 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:11.835 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:09:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:12.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 16:09:13 np0005532761 podman[265111]: 2025-11-23 21:09:13.535857816 +0000 UTC m=+0.051319295 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:09:13 np0005532761 podman[265110]: 2025-11-23 21:09:13.561787514 +0000 UTC m=+0.080089168 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:09:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:13.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 27 op/s
Nov 23 16:09:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:17.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:09:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:09:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:17] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:09:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:17] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:09:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:17.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:09:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:09:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab780010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:09:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:19 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:19.837 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:09:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:09:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:09:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78001410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:23.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 16:09:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:25 np0005532761 podman[265193]: 2025-11-23 21:09:25.59300526 +0000 UTC m=+0.098326474 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 16:09:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:27.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:09:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:27.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:09:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:27.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:09:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 16:09:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:09:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:09:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:28.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 84 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 23 16:09:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:09:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.836121655 +0000 UTC m=+0.039477159 container create 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:09:31 np0005532761 systemd[1]: Started libpod-conmon-51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337.scope.
Nov 23 16:09:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.909561906 +0000 UTC m=+0.112917410 container init 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.818188939 +0000 UTC m=+0.021544453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.915438043 +0000 UTC m=+0.118793537 container start 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:09:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:31.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.918915325 +0000 UTC m=+0.122270809 container attach 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 16:09:31 np0005532761 determined_chebyshev[265413]: 167 167
Nov 23 16:09:31 np0005532761 systemd[1]: libpod-51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337.scope: Deactivated successfully.
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.920956189 +0000 UTC m=+0.124311683 container died 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:09:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d0938d925e9f527093959b8933de702485ddd7b59a1cfd231f141b4f016b5775-merged.mount: Deactivated successfully.
Nov 23 16:09:31 np0005532761 podman[265396]: 2025-11-23 21:09:31.953914405 +0000 UTC m=+0.157269899 container remove 51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chebyshev, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:09:31 np0005532761 systemd[1]: libpod-conmon-51906a1a1933f5e7070edd307c79b576baf2f9f584fead88244a0fac2c00b337.scope: Deactivated successfully.
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.157484583 +0000 UTC m=+0.055960148 container create e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:09:32 np0005532761 systemd[1]: Started libpod-conmon-e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881.scope.
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.134460692 +0000 UTC m=+0.032936337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:32 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:32 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.262286568 +0000 UTC m=+0.160762203 container init e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.277062401 +0000 UTC m=+0.175537966 container start e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.281108289 +0000 UTC m=+0.179583854 container attach e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 16:09:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:32.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:32 np0005532761 fervent_roentgen[265456]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:09:32 np0005532761 fervent_roentgen[265456]: --> All data devices are unavailable
Nov 23 16:09:32 np0005532761 systemd[1]: libpod-e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881.scope: Deactivated successfully.
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.63946077 +0000 UTC m=+0.537936345 container died e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:09:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a67a5bca83c37108cdcda28dde1126abfbf3150067663fd820144e2d623e54e1-merged.mount: Deactivated successfully.
Nov 23 16:09:32 np0005532761 podman[265439]: 2025-11-23 21:09:32.679319598 +0000 UTC m=+0.577795163 container remove e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:09:32 np0005532761 systemd[1]: libpod-conmon-e2e095158f965b7b52356898de28c7b79844541d3c3632d3651ff4480b02c881.scope: Deactivated successfully.
Nov 23 16:09:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:09:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:09:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.260280284 +0000 UTC m=+0.043261060 container create 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Nov 23 16:09:33 np0005532761 systemd[1]: Started libpod-conmon-5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8.scope.
Nov 23 16:09:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.328445786 +0000 UTC m=+0.111426592 container init 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.33465414 +0000 UTC m=+0.117634926 container start 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.337384293 +0000 UTC m=+0.120365079 container attach 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.244215978 +0000 UTC m=+0.027196784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:33 np0005532761 quizzical_bardeen[265592]: 167 167
Nov 23 16:09:33 np0005532761 systemd[1]: libpod-5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8.scope: Deactivated successfully.
Nov 23 16:09:33 np0005532761 conmon[265592]: conmon 5ee7a8ae70df5b8627cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8.scope/container/memory.events
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.342991672 +0000 UTC m=+0.125972458 container died 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:09:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-425b2eedfbbd42e1adf58c0e57ade59a16982723544ccc95b5810e86cf85dbc8-merged.mount: Deactivated successfully.
Nov 23 16:09:33 np0005532761 podman[265575]: 2025-11-23 21:09:33.379930403 +0000 UTC m=+0.162911189 container remove 5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:09:33 np0005532761 systemd[1]: libpod-conmon-5ee7a8ae70df5b8627cd6850834ebef7b642b4ce83d3fabc2b14eeb46eb12aa8.scope: Deactivated successfully.
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.563791819 +0000 UTC m=+0.043503907 container create ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 16:09:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:33 np0005532761 systemd[1]: Started libpod-conmon-ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b.scope.
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.546285603 +0000 UTC m=+0.025997761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9beac7eecec302a2639cf53904364e2b92facd40637e3de7be3e7e318c5f0c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9beac7eecec302a2639cf53904364e2b92facd40637e3de7be3e7e318c5f0c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9beac7eecec302a2639cf53904364e2b92facd40637e3de7be3e7e318c5f0c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9beac7eecec302a2639cf53904364e2b92facd40637e3de7be3e7e318c5f0c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.666666112 +0000 UTC m=+0.146378250 container init ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.679490742 +0000 UTC m=+0.159202880 container start ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.684084335 +0000 UTC m=+0.163796533 container attach ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:09:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]: {
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:    "1": [
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:        {
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "devices": [
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "/dev/loop3"
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            ],
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "lv_name": "ceph_lv0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "lv_size": "21470642176",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "name": "ceph_lv0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "tags": {
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.cluster_name": "ceph",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.crush_device_class": "",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.encrypted": "0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.osd_id": "1",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.type": "block",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.vdo": "0",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:                "ceph.with_tpm": "0"
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            },
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "type": "block",
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:            "vg_name": "ceph_vg0"
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:        }
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]:    ]
Nov 23 16:09:33 np0005532761 dazzling_cerf[265633]: }
Nov 23 16:09:33 np0005532761 systemd[1]: libpod-ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b.scope: Deactivated successfully.
Nov 23 16:09:33 np0005532761 podman[265616]: 2025-11-23 21:09:33.979210586 +0000 UTC m=+0.458922674 container died ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:09:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9beac7eecec302a2639cf53904364e2b92facd40637e3de7be3e7e318c5f0c40-merged.mount: Deactivated successfully.
Nov 23 16:09:34 np0005532761 podman[265616]: 2025-11-23 21:09:34.01359039 +0000 UTC m=+0.493302488 container remove ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 16:09:34 np0005532761 systemd[1]: libpod-conmon-ace6e65dfdf2d290b6ec0c9f09d62d02fd1724bac973b07043bea59454e7705b.scope: Deactivated successfully.
Nov 23 16:09:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.518759692 +0000 UTC m=+0.033126222 container create 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:09:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:34 np0005532761 systemd[1]: Started libpod-conmon-7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830.scope.
Nov 23 16:09:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.505697315 +0000 UTC m=+0.020063865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.60524156 +0000 UTC m=+0.119608120 container init 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.612651196 +0000 UTC m=+0.127017736 container start 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.616034076 +0000 UTC m=+0.130400606 container attach 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:09:34 np0005532761 recursing_haslett[265764]: 167 167
Nov 23 16:09:34 np0005532761 systemd[1]: libpod-7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830.scope: Deactivated successfully.
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.620878315 +0000 UTC m=+0.135244845 container died 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:09:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-155024669ae4bc47c2c2001ac404058e4cbca69fce6abb79ef9c1ca3fd0c0e31-merged.mount: Deactivated successfully.
Nov 23 16:09:34 np0005532761 podman[265746]: 2025-11-23 21:09:34.656640005 +0000 UTC m=+0.171006525 container remove 7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:09:34 np0005532761 systemd[1]: libpod-conmon-7112d7c47fa57b6958e6ad630f6e4a9e0683ebb85272a1b6bd6fd178e05f6830.scope: Deactivated successfully.
Nov 23 16:09:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:34 np0005532761 podman[265790]: 2025-11-23 21:09:34.822454471 +0000 UTC m=+0.039722587 container create fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:09:34 np0005532761 systemd[1]: Started libpod-conmon-fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14.scope.
Nov 23 16:09:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:09:34 np0005532761 podman[265790]: 2025-11-23 21:09:34.80511216 +0000 UTC m=+0.022380286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:09:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961b6b698b67260aeadc355daf9cc266b0f9d9ff258f5f95be71570abc33c1e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961b6b698b67260aeadc355daf9cc266b0f9d9ff258f5f95be71570abc33c1e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961b6b698b67260aeadc355daf9cc266b0f9d9ff258f5f95be71570abc33c1e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961b6b698b67260aeadc355daf9cc266b0f9d9ff258f5f95be71570abc33c1e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:34 np0005532761 podman[265790]: 2025-11-23 21:09:34.914258539 +0000 UTC m=+0.131526675 container init fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:09:34 np0005532761 podman[265790]: 2025-11-23 21:09:34.921010349 +0000 UTC m=+0.138278445 container start fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:09:34 np0005532761 podman[265790]: 2025-11-23 21:09:34.924270635 +0000 UTC m=+0.141538751 container attach fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 16:09:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Nov 23 16:09:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:35 np0005532761 lvm[265882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:09:35 np0005532761 lvm[265882]: VG ceph_vg0 finished
Nov 23 16:09:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:35 np0005532761 cranky_swartz[265807]: {}
Nov 23 16:09:35 np0005532761 systemd[1]: libpod-fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14.scope: Deactivated successfully.
Nov 23 16:09:35 np0005532761 systemd[1]: libpod-fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14.scope: Consumed 1.057s CPU time.
Nov 23 16:09:35 np0005532761 podman[265790]: 2025-11-23 21:09:35.644228745 +0000 UTC m=+0.861496841 container died fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 16:09:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-961b6b698b67260aeadc355daf9cc266b0f9d9ff258f5f95be71570abc33c1e8-merged.mount: Deactivated successfully.
Nov 23 16:09:35 np0005532761 podman[265790]: 2025-11-23 21:09:35.681641288 +0000 UTC m=+0.898909394 container remove fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 16:09:35 np0005532761 systemd[1]: libpod-conmon-fedf998686fb6a09ab088bfa8cf01d744d4da179e54e3615a9ccc22b20f48b14.scope: Deactivated successfully.
Nov 23 16:09:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:09:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:09:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:36 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:36 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 23 16:09:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:37.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:09:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 16:09:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:09:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:09:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 23 16:09:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:40.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 16:09:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 16:09:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:44.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:44 np0005532761 podman[265954]: 2025-11-23 21:09:44.549750918 +0000 UTC m=+0.060768145 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 16:09:44 np0005532761 podman[265940]: 2025-11-23 21:09:44.576871318 +0000 UTC m=+0.090324670 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:09:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 16:09:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:45.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:46.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:47.148Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:09:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:47.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:09:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:09:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:09:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:09:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:09:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:09:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:48.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 93 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 KiB/s wr, 92 op/s
Nov 23 16:09:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:50.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 910 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Nov 23 16:09:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:51.866 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:51.867 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:09:51.867 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:52.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Nov 23 16:09:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:53.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:54.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:09:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab7c002c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:09:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:55.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:56 np0005532761 podman[266013]: 2025-11-23 21:09:56.559248194 +0000 UTC m=+0.072324202 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 16:09:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba800bf10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:09:57.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:09:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:09:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:09:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:09:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:09:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:09:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:09:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:09:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:09:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:59.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 16:10:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:00 np0005532761 ceph-mon[74569]: overall HEALTH_OK
Nov 23 16:10:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 23 16:10:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:01.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:02 np0005532761 nova_compute[257263]: 2025-11-23 21:10:02.322 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:02 np0005532761 nova_compute[257263]: 2025-11-23 21:10:02.322 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:10:03
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:10:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:10:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 108 KiB/s wr, 19 op/s
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:10:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:10:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:03.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:04 np0005532761 nova_compute[257263]: 2025-11-23 21:10:04.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:04 np0005532761 nova_compute[257263]: 2025-11-23 21:10:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:04 np0005532761 nova_compute[257263]: 2025-11-23 21:10:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:04 np0005532761 nova_compute[257263]: 2025-11-23 21:10:04.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:10:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 129 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 442 KiB/s wr, 30 op/s
Nov 23 16:10:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:06 np0005532761 nova_compute[257263]: 2025-11-23 21:10:06.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:10:06 np0005532761 nova_compute[257263]: 2025-11-23 21:10:06.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:10:06 np0005532761 nova_compute[257263]: 2025-11-23 21:10:06.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:10:06 np0005532761 nova_compute[257263]: 2025-11-23 21:10:06.051 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:10:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:06.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:07.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:10:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:07.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:10:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:07.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:10:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 129 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 347 KiB/s wr, 12 op/s
Nov 23 16:10:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:10:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:10:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:07.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:08 np0005532761 nova_compute[257263]: 2025-11-23 21:10:08.046 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:10:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.055 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.056 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.056 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:10:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 142 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 982 KiB/s wr, 14 op/s
Nov 23 16:10:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:10:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512106924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.494 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:10:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.635 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.636 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4902MB free_disk=59.93894577026367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.636 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.636 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.721 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.721 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:10:09 np0005532761 nova_compute[257263]: 2025-11-23 21:10:09.742 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:10:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:10:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/645707027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:10:10 np0005532761 nova_compute[257263]: 2025-11-23 21:10:10.171 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:10:10 np0005532761 nova_compute[257263]: 2025-11-23 21:10:10.178 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:10:10 np0005532761 nova_compute[257263]: 2025-11-23 21:10:10.192 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:10:10 np0005532761 nova_compute[257263]: 2025-11-23 21:10:10.193 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:10:10 np0005532761 nova_compute[257263]: 2025-11-23 21:10:10.193 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:10:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:11 np0005532761 nova_compute[257263]: 2025-11-23 21:10:11.189 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:10:11 np0005532761 nova_compute[257263]: 2025-11-23 21:10:11.204 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:10:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:10:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:10:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:13.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 23 16:10:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:15 np0005532761 podman[266126]: 2025-11-23 21:10:15.602565646 +0000 UTC m=+0.099069873 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 16:10:15 np0005532761 podman[266125]: 2025-11-23 21:10:15.62304168 +0000 UTC m=+0.122767213 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:10:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:10:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:15.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:10:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:16.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:17.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:10:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:17.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:10:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:17.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:10:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 539 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Nov 23 16:10:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:10:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:10:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:10:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:10:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:10:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:10:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:18.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:18 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:18.393 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:10:18 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:18.394 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:10:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 63 op/s
Nov 23 16:10:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:10:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:10:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:20.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:20 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:20.397 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:10:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 862 KiB/s wr, 89 op/s
Nov 23 16:10:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:21.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:22.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:10:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:24.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 176 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 617 KiB/s wr, 85 op/s
Nov 23 16:10:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:26.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:27.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:10:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:27.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:10:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 176 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 604 KiB/s wr, 57 op/s
Nov 23 16:10:27 np0005532761 podman[266207]: 2025-11-23 21:10:27.532762645 +0000 UTC m=+0.056013999 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:10:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:27] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:10:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:27] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:10:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 181 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 916 KiB/s wr, 66 op/s
Nov 23 16:10:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:30.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:30.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 23 16:10:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:10:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:10:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:32.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:10:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:10:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:10:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:34.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:34.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 181 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 23 16:10:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:10:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.4 total, 600.0 interval
Cumulative writes: 9526 writes, 35K keys, 9526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 9526 writes, 2376 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1542 writes, 4300 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 3.82 MB, 0.01 MB/s
Interval WAL: 1542 writes, 694 syncs, 2.22 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:10:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:36.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:10:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:37.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:10:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 181 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 61 op/s
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:10:37 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:37] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Nov 23 16:10:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:37] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Nov 23 16:10:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:10:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:38.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:10:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:10:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 163 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.242500888 +0000 UTC m=+0.051965932 container create e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:39 np0005532761 systemd[1]: Started libpod-conmon-e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591.scope.
Nov 23 16:10:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.214656223 +0000 UTC m=+0.024121287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.318785971 +0000 UTC m=+0.128251045 container init e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.324919983 +0000 UTC m=+0.134385007 container start e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 16:10:39 np0005532761 sleepy_beaver[266502]: 167 167
Nov 23 16:10:39 np0005532761 systemd[1]: libpod-e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532761 conmon[266502]: conmon e28f735de8b2f6396800 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591.scope/container/memory.events
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.337203067 +0000 UTC m=+0.146668161 container attach e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.337653488 +0000 UTC m=+0.147118532 container died e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 16:10:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c9eaa432549cddbd348fecbec46fb91e128ea49d9a1f320879a9bec965665f16-merged.mount: Deactivated successfully.
Nov 23 16:10:39 np0005532761 podman[266486]: 2025-11-23 21:10:39.380434528 +0000 UTC m=+0.189899572 container remove e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:10:39 np0005532761 systemd[1]: libpod-conmon-e28f735de8b2f6396800779b212905c1cd9bd419d982476f633447427747b591.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532761 podman[266528]: 2025-11-23 21:10:39.552554839 +0000 UTC m=+0.042914824 container create 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:39 np0005532761 systemd[1]: Started libpod-conmon-932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405.scope.
Nov 23 16:10:39 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:39 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:39 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:10:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:39 np0005532761 podman[266528]: 2025-11-23 21:10:39.623975143 +0000 UTC m=+0.114335128 container init 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 16:10:39 np0005532761 podman[266528]: 2025-11-23 21:10:39.535740205 +0000 UTC m=+0.026100210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:39 np0005532761 podman[266528]: 2025-11-23 21:10:39.631172293 +0000 UTC m=+0.121532268 container start 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 16:10:39 np0005532761 podman[266528]: 2025-11-23 21:10:39.6359782 +0000 UTC m=+0.126338175 container attach 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:10:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:39 np0005532761 goofy_panini[266546]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:10:39 np0005532761 goofy_panini[266546]: --> All data devices are unavailable
Nov 23 16:10:39 np0005532761 systemd[1]: libpod-932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532761 podman[266561]: 2025-11-23 21:10:39.965307469 +0000 UTC m=+0.021436977 container died 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:10:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-15514ba55ab237f160769d7a7f41d99b01977e27b79d1e69ce66140bfff1584d-merged.mount: Deactivated successfully.
Nov 23 16:10:40 np0005532761 podman[266561]: 2025-11-23 21:10:40.010002279 +0000 UTC m=+0.066131757 container remove 932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 16:10:40 np0005532761 systemd[1]: libpod-conmon-932c880927ff1f65a9a91ee06043ec1cd947344026f601dcffc1ca0896c79405.scope: Deactivated successfully.
Nov 23 16:10:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:40.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:40 np0005532761 podman[266666]: 2025-11-23 21:10:40.566789989 +0000 UTC m=+0.029033618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.06105187 +0000 UTC m=+0.523295489 container create d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:10:41 np0005532761 systemd[1]: Started libpod-conmon-d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281.scope.
Nov 23 16:10:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.150672544 +0000 UTC m=+0.612916153 container init d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.157466374 +0000 UTC m=+0.619709973 container start d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.160548025 +0000 UTC m=+0.622791604 container attach d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 16:10:41 np0005532761 cranky_heyrovsky[266685]: 167 167
Nov 23 16:10:41 np0005532761 systemd[1]: libpod-d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281.scope: Deactivated successfully.
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.162177368 +0000 UTC m=+0.624420957 container died d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-95830a12d5310e70b2c77a2d71a921d50ba2c2b8c1c62edb5b687e93e2dcaa7b-merged.mount: Deactivated successfully.
Nov 23 16:10:41 np0005532761 podman[266666]: 2025-11-23 21:10:41.202146212 +0000 UTC m=+0.664389801 container remove d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_heyrovsky, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:10:41 np0005532761 systemd[1]: libpod-conmon-d9cc433be3afeb3f1990b444a560604f9c2e5dd736980f0d77f11168cb2ed281.scope: Deactivated successfully.
Nov 23 16:10:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 96 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 1.3 MiB/s wr, 75 op/s
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.390648606 +0000 UTC m=+0.049874827 container create f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 16:10:41 np0005532761 systemd[1]: Started libpod-conmon-f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94.scope.
Nov 23 16:10:41 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a83f3f1363cb2fc5939b60a647ecdbc68d3bd6dce35f56bce5b4fac6faeeab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a83f3f1363cb2fc5939b60a647ecdbc68d3bd6dce35f56bce5b4fac6faeeab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.36537592 +0000 UTC m=+0.024602221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a83f3f1363cb2fc5939b60a647ecdbc68d3bd6dce35f56bce5b4fac6faeeab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:41 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a83f3f1363cb2fc5939b60a647ecdbc68d3bd6dce35f56bce5b4fac6faeeab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.477072256 +0000 UTC m=+0.136298527 container init f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.497149956 +0000 UTC m=+0.156376187 container start f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.501052689 +0000 UTC m=+0.160278910 container attach f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:10:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]: {
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:    "1": [
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:        {
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "devices": [
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "/dev/loop3"
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            ],
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "lv_name": "ceph_lv0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "lv_size": "21470642176",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "name": "ceph_lv0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "tags": {
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.cluster_name": "ceph",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.crush_device_class": "",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.encrypted": "0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.osd_id": "1",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.type": "block",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.vdo": "0",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:                "ceph.with_tpm": "0"
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            },
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "type": "block",
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:            "vg_name": "ceph_vg0"
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:        }
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]:    ]
Nov 23 16:10:41 np0005532761 wizardly_vaughan[266729]: }
Nov 23 16:10:41 np0005532761 systemd[1]: libpod-f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94.scope: Deactivated successfully.
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.816005108 +0000 UTC m=+0.475231349 container died f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:10:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d1a83f3f1363cb2fc5939b60a647ecdbc68d3bd6dce35f56bce5b4fac6faeeab-merged.mount: Deactivated successfully.
Nov 23 16:10:41 np0005532761 podman[266712]: 2025-11-23 21:10:41.85813843 +0000 UTC m=+0.517364651 container remove f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_vaughan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:10:41 np0005532761 systemd[1]: libpod-conmon-f348f7702a648382c0972babb37fa68cded671c7784bc61561ebcf7104b9cd94.scope: Deactivated successfully.
Nov 23 16:10:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.4986633 +0000 UTC m=+0.047569426 container create 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:42 np0005532761 systemd[1]: Started libpod-conmon-0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed.scope.
Nov 23 16:10:42 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.478653823 +0000 UTC m=+0.027559989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.736161927 +0000 UTC m=+0.285068133 container init 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.743222263 +0000 UTC m=+0.292128389 container start 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.747912076 +0000 UTC m=+0.296818312 container attach 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:10:42 np0005532761 affectionate_hofstadter[266861]: 167 167
Nov 23 16:10:42 np0005532761 systemd[1]: libpod-0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed.scope: Deactivated successfully.
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.750201097 +0000 UTC m=+0.299107223 container died 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:10:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:42 np0005532761 systemd[1]: var-lib-containers-storage-overlay-72dea05e209877fb9b1cf0bb2c0d608fc5f22270170c13e043a5617ac6552a84-merged.mount: Deactivated successfully.
Nov 23 16:10:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:42 np0005532761 podman[266844]: 2025-11-23 21:10:42.803272497 +0000 UTC m=+0.352178623 container remove 0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 16:10:42 np0005532761 systemd[1]: libpod-conmon-0146693a2ae65f67a0e50badec283b2db8cc2c64f9e4b31504c71f82cd60d4ed.scope: Deactivated successfully.
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.004574378 +0000 UTC m=+0.052095145 container create 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:10:43 np0005532761 systemd[1]: Started libpod-conmon-479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0.scope.
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:42.978317846 +0000 UTC m=+0.025838673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:10:43 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:10:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf843c2111c21598e00be9d3cc5f20d6633cfb7f2f8e5fed41fdceb66a58a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf843c2111c21598e00be9d3cc5f20d6633cfb7f2f8e5fed41fdceb66a58a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf843c2111c21598e00be9d3cc5f20d6633cfb7f2f8e5fed41fdceb66a58a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:43 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdf843c2111c21598e00be9d3cc5f20d6633cfb7f2f8e5fed41fdceb66a58a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.096467483 +0000 UTC m=+0.143988250 container init 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.106406575 +0000 UTC m=+0.153927342 container start 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.110955295 +0000 UTC m=+0.158476032 container attach 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:10:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 96 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 30 op/s
Nov 23 16:10:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:43 np0005532761 lvm[266977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:10:43 np0005532761 lvm[266977]: VG ceph_vg0 finished
Nov 23 16:10:43 np0005532761 eager_wescoff[266903]: {}
Nov 23 16:10:43 np0005532761 systemd[1]: libpod-479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0.scope: Deactivated successfully.
Nov 23 16:10:43 np0005532761 systemd[1]: libpod-479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0.scope: Consumed 1.065s CPU time.
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.782576956 +0000 UTC m=+0.830097703 container died 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:10:43 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4bdf843c2111c21598e00be9d3cc5f20d6633cfb7f2f8e5fed41fdceb66a58a5-merged.mount: Deactivated successfully.
Nov 23 16:10:43 np0005532761 podman[266886]: 2025-11-23 21:10:43.823603208 +0000 UTC m=+0.871123955 container remove 479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:10:43 np0005532761 systemd[1]: libpod-conmon-479605dea290b801c27e35c897a1fa65e0eef76a03a7f86d4dd0f4ad54e07da0.scope: Deactivated successfully.
Nov 23 16:10:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:10:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:10:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:44.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:44 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:44 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Nov 23 16:10:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:46.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:46.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:46 np0005532761 podman[267048]: 2025-11-23 21:10:46.555446456 +0000 UTC m=+0.067482942 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 16:10:46 np0005532761 podman[267047]: 2025-11-23 21:10:46.575780313 +0000 UTC m=+0.097926095 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:10:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:47.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:10:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:47.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:10:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:47.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:10:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 23 16:10:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:47] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Nov 23 16:10:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:47] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Nov 23 16:10:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:48.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:10:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:10:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 23 16:10:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:50.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:10:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:10:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Nov 23 16:10:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:51.867 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:51.868 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:10:51.868 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:52.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:52.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 16:10:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:54.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:54.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:10:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab840046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 16:10:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:56.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:10:57.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:10:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:10:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:57] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 23 16:10:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:10:57] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 23 16:10:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:10:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:58.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:58 np0005532761 podman[267108]: 2025-11-23 21:10:58.544454179 +0000 UTC m=+0.061256297 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 16:10:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab880049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:10:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:10:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:10:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:11:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:02 np0005532761 nova_compute[257263]: 2025-11-23 21:11:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:03 np0005532761 nova_compute[257263]: 2025-11-23 21:11:03.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:11:03
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'vms', '.mgr', 'volumes', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:11:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:11:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:11:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:11:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:04 np0005532761 nova_compute[257263]: 2025-11-23 21:11:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:05 np0005532761 nova_compute[257263]: 2025-11-23 21:11:05.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Nov 23 16:11:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:06 np0005532761 nova_compute[257263]: 2025-11-23 21:11:06.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:06 np0005532761 nova_compute[257263]: 2025-11-23 21:11:06.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:11:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:07.160Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:11:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:07.160Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:11:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:07.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:11:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 16:11:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:11:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:11:08 np0005532761 nova_compute[257263]: 2025-11-23 21:11:08.032 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:08 np0005532761 nova_compute[257263]: 2025-11-23 21:11:08.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:08 np0005532761 nova_compute[257263]: 2025-11-23 21:11:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:11:08 np0005532761 nova_compute[257263]: 2025-11-23 21:11:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:11:08 np0005532761 nova_compute[257263]: 2025-11-23 21:11:08.047 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:11:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:08.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:08.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:11:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.062 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.063 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:11:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:10.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:11:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881988689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.517 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
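[editor's note] The resource tracker sizes its Ceph-backed disk inventory by shelling out to the exact command logged above, and the mon audit lines confirm the dispatch. A sketch reproducing the call and reading cluster capacity; the 'stats'/'total_bytes' key names are an assumption about the ceph df JSON schema, not taken from this log.

    # Sketch: the same "ceph df" probe, parsed for cluster capacity.
    import json
    import subprocess

    def ceph_capacity_gib(conf='/etc/ceph/ceph.conf', user='openstack'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        stats = json.loads(out)['stats']  # assumed key names
        gib = 1024 ** 3
        return stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib

    # total, avail = ceph_capacity_gib()  # ~60 GiB per the pgmap lines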
Nov 23 16:11:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.675 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.676 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4880MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.676 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.677 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.728 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.728 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:11:10 np0005532761 nova_compute[257263]: 2025-11-23 21:11:10.742 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:11:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412639435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:11:11 np0005532761 nova_compute[257263]: 2025-11-23 21:11:11.158 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:11:11 np0005532761 nova_compute[257263]: 2025-11-23 21:11:11.163 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:11:11 np0005532761 nova_compute[257263]: 2025-11-23 21:11:11.180 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:11:11 np0005532761 nova_compute[257263]: 2025-11-23 21:11:11.181 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:11:11 np0005532761 nova_compute[257263]: 2025-11-23 21:11:11.181 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
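[editor's note] The inventory reported to placement above pairs each resource class with a reserved amount and an allocation ratio; placement's effective capacity is normally (total - reserved) * allocation_ratio. Worked out for the figures in this log:

    # Worked arithmetic for the inventory above; only the capacity
    # formula is assumed, the numbers come straight from the log line.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable units')
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1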
Nov 23 16:11:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 52 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 1 op/s
Nov 23 16:11:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.755032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271755102, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 4148635, "memory_usage": 4216416, "flush_reason": "Manual Compaction"}
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271806043, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4048390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24746, "largest_seqno": 26867, "table_properties": {"data_size": 4038870, "index_size": 5950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19848, "raw_average_key_size": 20, "raw_value_size": 4019905, "raw_average_value_size": 4118, "num_data_blocks": 261, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932059, "oldest_key_time": 1763932059, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 51078 microseconds, and 16078 cpu microseconds.
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.806111) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4048390 bytes OK
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.806135) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.808186) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.808201) EVENT_LOG_v1 {"time_micros": 1763932271808196, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.808220) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4139938, prev total WAL file size 4139938, number of live WAL files 2.
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809491) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3953KB)], [56(12MB)]
Nov 23 16:11:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271809568, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17090869, "oldest_snapshot_seqno": -1}
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5844 keys, 14928297 bytes, temperature: kUnknown
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272086610, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14928297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14888222, "index_size": 24349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148647, "raw_average_key_size": 25, "raw_value_size": 14781859, "raw_average_value_size": 2529, "num_data_blocks": 994, "num_entries": 5844, "num_filter_entries": 5844, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.086954) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14928297 bytes
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.088377) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.7 rd, 53.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6362, records dropped: 518 output_compression: NoCompression
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.088403) EVENT_LOG_v1 {"time_micros": 1763932272088391, "job": 30, "event": "compaction_finished", "compaction_time_micros": 277117, "compaction_time_cpu_micros": 57456, "output_level": 6, "num_output_files": 1, "total_output_size": 14928297, "num_input_records": 6362, "num_output_records": 5844, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272089727, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272093680, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.093728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.093734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.093737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.093740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:11:12.093743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
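[editor's note] Jobs 29 and 30 above are a manual memtable flush and a level-0 to level-6 compaction on the mon's RocksDB store; the logged write-amplify(3.7) checks out as output bytes over L0 input bytes, 14928297 / 4048390 ≈ 3.69, and read-write-amplify(7.9) as (17090869 + 14928297) / 4048390 ≈ 7.9. The EVENT_LOG_v1 records embed plain JSON after a fixed marker, so they can be mined from journal text directly, as in this sketch:

    # Sketch: extract rocksdb EVENT_LOG_v1 JSON payloads from journal text;
    # the marker string is taken from the log lines above.
    import json

    MARK = 'EVENT_LOG_v1 '

    def rocksdb_events(lines):
        for line in lines:
            i = line.find(MARK)
            if i != -1:
                yield json.loads(line[i + len(MARK):])

    # for ev in rocksdb_events(open('/var/log/messages')):
    #     if ev.get('event') == 'compaction_finished':
    #         print(ev['job'], ev['total_output_size'], ev['lsm_state'])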
Nov 23 16:11:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:13 np0005532761 nova_compute[257263]: 2025-11-23 21:11:13.182 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:11:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 52 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 1 op/s
Nov 23 16:11:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab900014b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:14.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:14.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 16:11:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:16.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:17.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:11:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 16:11:17 np0005532761 podman[267222]: 2025-11-23 21:11:17.553754873 +0000 UTC m=+0.064634236 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 16:11:17 np0005532761 podman[267221]: 2025-11-23 21:11:17.559621438 +0000 UTC m=+0.082430256 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:11:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:11:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:11:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:11:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:11:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 23 16:11:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 16:11:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:22.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Nov 23 16:11:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Nov 23 16:11:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:26.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:11:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:11:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:27 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:11:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 23 16:11:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:28.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:28 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 106 KiB/s wr, 93 op/s
Nov 23 16:11:29 np0005532761 podman[267305]: 2025-11-23 21:11:29.568536815 +0000 UTC m=+0.076461298 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:11:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:29 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:30.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:30.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:30 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 23 16:11:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:31 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:32.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:32 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:11:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:11:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 23 16:11:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:33 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:34.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:34.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:34 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:11:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:35 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
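The beast access lines above show anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 arriving roughly every two seconds, consistent with a load-balancer health check against radosgw. A rough equivalent is sketched below; the port is a placeholder (the log does not show which port the beast frontend listens on), and http.client speaks HTTP/1.1 rather than the probe's HTTP/1.0:

    import http.client

    # Anonymous HEAD / against the radosgw beast frontend (port assumed).
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, matching the access log above
    conn.close()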
Nov 23 16:11:36 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:36.733 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:11:36 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:36.734 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
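The two agent lines above show the ovsdbapp IDL matching an SB_Global update (nb_cfg 7 -> 8) and the metadata agent deferring its chassis ack for 8 seconds. A sketch of that "randomized delay, then ack" pattern follows; sb_idl and chassis_uuid are placeholder names, not the agent's actual objects:

    import random
    import time

    nb_cfg = 8  # value taken from the SB_Global update logged above
    # Spread the acks so many agents don't write to the SB DB at once.
    time.sleep(random.randint(0, 10))
    # Then record the ack, as the DbSetCommand at 16:11:44 below does:
    # sb_idl.db_set('Chassis_Private', chassis_uuid,
    #               ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}))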
Nov 23 16:11:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:36 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:37.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:11:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:37.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
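The alertmanager warnings above mean the ceph-dashboard webhook receivers on compute-1 and compute-2 are unreachable (dial timeouts, then retry cancellation). For illustration only, a stand-in listener that would accept those POSTs is sketched below; the real endpoint is the mgr dashboard module, and serving plain HTTP on 8443 is purely an assumption to match the logged URL:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_response(404); self.end_headers(); return
            body = self.rfile.read(int(self.headers.get("Content-Length", "0")))
            alerts = json.loads(body).get("alerts", [])  # Alertmanager webhook payload
            print([a.get("labels", {}).get("alertname") for a in alerts])
            self.send_response(200); self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()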
Nov 23 16:11:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:11:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:37 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:37] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 23 16:11:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:37] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 23 16:11:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:38.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:38 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:11:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:39 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:40 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Nov 23 16:11:41 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:41 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:42 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 81 KiB/s wr, 11 op/s
Nov 23 16:11:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:43 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:44 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:44.737 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:11:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:11:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:44 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 83 KiB/s wr, 12 op/s
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.421860645 +0000 UTC m=+0.043695414 container create c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:45 np0005532761 systemd[1]: Started libpod-conmon-c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2.scope.
Nov 23 16:11:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.404395775 +0000 UTC m=+0.026230564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.506111729 +0000 UTC m=+0.127946528 container init c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.512874366 +0000 UTC m=+0.134709135 container start c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.516313868 +0000 UTC m=+0.138148637 container attach c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 16:11:45 np0005532761 pedantic_almeida[267558]: 167 167
Nov 23 16:11:45 np0005532761 systemd[1]: libpod-c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2.scope: Deactivated successfully.
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.520393225 +0000 UTC m=+0.142227984 container died c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:11:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c1e199780956aeecb3def25fad5aaaec9a5cc44bea1b80b83f8647b9f577c099-merged.mount: Deactivated successfully.
Nov 23 16:11:45 np0005532761 podman[267541]: 2025-11-23 21:11:45.563921274 +0000 UTC m=+0.185756043 container remove c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:11:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:45 np0005532761 systemd[1]: libpod-conmon-c8334030dc12f271c1c6a6df6b56c6ca783d7530fd90b07faf435a010cc16df2.scope: Deactivated successfully.
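The short-lived container pedantic_almeida above runs the ceph image, prints "167 167", and exits; this matches cephadm's uid/gid probe, which stats /var/lib/ceph inside the image (167 being the ceph user). That interpretation is an inference from the output, not stated in the log. A hypothetical equivalent:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # cephadm-style probe: report the uid/gid owning /var/lib/ceph in the image.
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    ).stdout
    print(out.strip())  # "167 167", as logged above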
Nov 23 16:11:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:45 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:45 np0005532761 podman[267581]: 2025-11-23 21:11:45.729189434 +0000 UTC m=+0.046122998 container create 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:11:45 np0005532761 systemd[1]: Started libpod-conmon-85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d.scope.
Nov 23 16:11:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10c739d8d6d6d49f044126fc7e778058e2b904115abb4933ccaba034f6de5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10c739d8d6d6d49f044126fc7e778058e2b904115abb4933ccaba034f6de5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10c739d8d6d6d49f044126fc7e778058e2b904115abb4933ccaba034f6de5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe10c739d8d6d6d49f044126fc7e778058e2b904115abb4933ccaba034f6de5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532761 podman[267581]: 2025-11-23 21:11:45.70818305 +0000 UTC m=+0.025116644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:45 np0005532761 podman[267581]: 2025-11-23 21:11:45.80595163 +0000 UTC m=+0.122885204 container init 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:45 np0005532761 podman[267581]: 2025-11-23 21:11:45.812899853 +0000 UTC m=+0.129833417 container start 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:45 np0005532761 podman[267581]: 2025-11-23 21:11:45.815631245 +0000 UTC m=+0.132564839 container attach 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:46.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]: [
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:    {
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "available": false,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "being_replaced": false,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "ceph_device_lvm": false,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "lsm_data": {},
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "lvs": [],
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "path": "/dev/sr0",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "rejected_reasons": [
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "Insufficient space (<5GB)",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "Has a FileSystem"
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        ],
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        "sys_api": {
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "actuators": null,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "device_nodes": [
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:                "sr0"
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            ],
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "devname": "sr0",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "human_readable_size": "482.00 KB",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "id_bus": "ata",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "model": "QEMU DVD-ROM",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "nr_requests": "2",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "parent": "/dev/sr0",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "partitions": {},
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "path": "/dev/sr0",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "removable": "1",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "rev": "2.5+",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "ro": "0",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "rotational": "1",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "sas_address": "",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "sas_device_handle": "",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "scheduler_mode": "mq-deadline",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "sectors": 0,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "sectorsize": "2048",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "size": 493568.0,
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "support_discard": "2048",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "type": "disk",
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:            "vendor": "QEMU"
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:        }
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]:    }
Nov 23 16:11:46 np0005532761 vibrant_wiles[267598]: ]
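The JSON emitted by container vibrant_wiles above is a cephadm device inventory: /dev/sr0 is rejected as an OSD candidate for "Insufficient space (<5GB)" and "Has a FileSystem". A short sketch that produces and filters such a report, assuming it runs on a host (or in a container) where ceph-volume is installed:

    import json
    import subprocess

    # `ceph-volume inventory --format json` prints a device list like the one above.
    report = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(report):
        status = "available" if dev["available"] else "; ".join(dev["rejected_reasons"])
        print(f'{dev["path"]}: {status}')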
Nov 23 16:11:46 np0005532761 systemd[1]: libpod-85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d.scope: Deactivated successfully.
Nov 23 16:11:46 np0005532761 podman[267581]: 2025-11-23 21:11:46.530892897 +0000 UTC m=+0.847826471 container died 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:11:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-fe10c739d8d6d6d49f044126fc7e778058e2b904115abb4933ccaba034f6de5b-merged.mount: Deactivated successfully.
Nov 23 16:11:46 np0005532761 podman[267581]: 2025-11-23 21:11:46.567138683 +0000 UTC m=+0.884072247 container remove 85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wiles, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:11:46 np0005532761 systemd[1]: libpod-conmon-85582f605d7e27ed316a649380ce30a496115d4c2893900909f2a486870b029d.scope: Deactivated successfully.
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:46 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:47.167Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:11:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:47.168Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:11:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:47.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:11:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Nov 23 16:11:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:47 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:47] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 23 16:11:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:47] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 23 16:11:47 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:11:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:11:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:48.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:48 np0005532761 podman[268919]: 2025-11-23 21:11:48.539616325 +0000 UTC m=+0.055833354 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 16:11:48 np0005532761 podman[268918]: 2025-11-23 21:11:48.565141679 +0000 UTC m=+0.080921026 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
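The two health_status=healthy events above are emitted when each container's configured healthcheck ('test': '/openstack/healthcheck') runs. The same check can be triggered by hand; a minimal sketch, with the container name taken from the log:

    import subprocess

    # Exit code 0 means healthy, mirroring the health_status=healthy events above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"], check=False)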
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:48 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.191831464 +0000 UTC m=+0.038607650 container create 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:11:49 np0005532761 systemd[1]: Started libpod-conmon-1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f.scope.
Nov 23 16:11:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 19 KiB/s wr, 2 op/s
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.173634054 +0000 UTC m=+0.020410260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:49 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.297932493 +0000 UTC m=+0.144708699 container init 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.304181718 +0000 UTC m=+0.150957904 container start 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:11:49 np0005532761 bold_spence[269073]: 167 167
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.30805849 +0000 UTC m=+0.154834696 container attach 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:49 np0005532761 systemd[1]: libpod-1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f.scope: Deactivated successfully.
Nov 23 16:11:49 np0005532761 conmon[269073]: conmon 1825d564cec2fdb30efd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f.scope/container/memory.events
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.31108987 +0000 UTC m=+0.157866056 container died 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:49 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7c97536c9636e91949437d83261b4c879f9aa9414a738a3687b08030fe8b3368-merged.mount: Deactivated successfully.
Nov 23 16:11:49 np0005532761 podman[269056]: 2025-11-23 21:11:49.345898759 +0000 UTC m=+0.192674945 container remove 1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_spence, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:49 np0005532761 systemd[1]: libpod-conmon-1825d564cec2fdb30efd44f3d2130e3f0d478a76764559ce8688b310b08cc88f.scope: Deactivated successfully.
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.496638916 +0000 UTC m=+0.038470866 container create 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 16:11:49 np0005532761 systemd[1]: Started libpod-conmon-94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453.scope.
Nov 23 16:11:49 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:49 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.574316015 +0000 UTC m=+0.116147965 container init 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.48124773 +0000 UTC m=+0.023079710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.581768213 +0000 UTC m=+0.123600163 container start 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.584884395 +0000 UTC m=+0.126716345 container attach 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 16:11:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:49 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:49 np0005532761 admiring_khayyam[269114]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:11:49 np0005532761 admiring_khayyam[269114]: --> All data devices are unavailable
Nov 23 16:11:49 np0005532761 systemd[1]: libpod-94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453.scope: Deactivated successfully.
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.897869412 +0000 UTC m=+0.439701362 container died 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:49 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f9e89a764fb328fb69e1d059d05d5d59063ac84cc87d6eecd7ffc948f0d3e4c1-merged.mount: Deactivated successfully.
Nov 23 16:11:49 np0005532761 podman[269098]: 2025-11-23 21:11:49.94020473 +0000 UTC m=+0.482036680 container remove 94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 16:11:49 np0005532761 systemd[1]: libpod-conmon-94d0e72ddc1ea629df6d48bc67cb742aea42a1606cefaf704ba9ad60863a8453.scope: Deactivated successfully.
Nov 23 16:11:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.442842371 +0000 UTC m=+0.034939972 container create cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:50 np0005532761 systemd[1]: Started libpod-conmon-cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31.scope.
Nov 23 16:11:50 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.427005763 +0000 UTC m=+0.019103384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.523105869 +0000 UTC m=+0.115203470 container init cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.529464006 +0000 UTC m=+0.121561607 container start cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.532195768 +0000 UTC m=+0.124293369 container attach cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 16:11:50 np0005532761 lucid_mayer[269250]: 167 167
Nov 23 16:11:50 np0005532761 systemd[1]: libpod-cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31.scope: Deactivated successfully.
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.534987942 +0000 UTC m=+0.127085543 container died cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 16:11:50 np0005532761 systemd[1]: var-lib-containers-storage-overlay-280e1b62c966fa8a9aeecc0a18ad4d21794cb1553bf2c5daaf98c03bf9474a99-merged.mount: Deactivated successfully.
Nov 23 16:11:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:50 np0005532761 podman[269234]: 2025-11-23 21:11:50.569684267 +0000 UTC m=+0.161781868 container remove cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:11:50 np0005532761 systemd[1]: libpod-conmon-cc9aa2eae5a49ce9766d0843601d088a0247091c4f46801955efe22412dbcb31.scope: Deactivated successfully.
Nov 23 16:11:50 np0005532761 podman[269273]: 2025-11-23 21:11:50.718492753 +0000 UTC m=+0.035195969 container create 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 16:11:50 np0005532761 systemd[1]: Started libpod-conmon-6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188.scope.
Nov 23 16:11:50 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ca08fdd5cce200e9ab48cbf6877d676aa8c0b5924db055acb33a646cf6b376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ca08fdd5cce200e9ab48cbf6877d676aa8c0b5924db055acb33a646cf6b376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ca08fdd5cce200e9ab48cbf6877d676aa8c0b5924db055acb33a646cf6b376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:50 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ca08fdd5cce200e9ab48cbf6877d676aa8c0b5924db055acb33a646cf6b376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:50 np0005532761 podman[269273]: 2025-11-23 21:11:50.703897739 +0000 UTC m=+0.020600975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:50 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:50 np0005532761 podman[269273]: 2025-11-23 21:11:50.915251575 +0000 UTC m=+0.231954791 container init 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:11:50 np0005532761 podman[269273]: 2025-11-23 21:11:50.921364366 +0000 UTC m=+0.238067582 container start 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:50 np0005532761 podman[269273]: 2025-11-23 21:11:50.925542646 +0000 UTC m=+0.242245862 container attach 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 16:11:51 np0005532761 amazing_cray[269289]: {
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:    "1": [
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:        {
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "devices": [
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "/dev/loop3"
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            ],
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "lv_name": "ceph_lv0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "lv_size": "21470642176",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "name": "ceph_lv0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "tags": {
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.cluster_name": "ceph",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.crush_device_class": "",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.encrypted": "0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.osd_id": "1",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.type": "block",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.vdo": "0",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:                "ceph.with_tpm": "0"
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            },
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "type": "block",
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:            "vg_name": "ceph_vg0"
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:        }
Nov 23 16:11:51 np0005532761 amazing_cray[269289]:    ]
Nov 23 16:11:51 np0005532761 amazing_cray[269289]: }
Nov 23 16:11:51 np0005532761 systemd[1]: libpod-6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188.scope: Deactivated successfully.
Nov 23 16:11:51 np0005532761 podman[269273]: 2025-11-23 21:11:51.235300979 +0000 UTC m=+0.552004195 container died 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 16:11:51 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b8ca08fdd5cce200e9ab48cbf6877d676aa8c0b5924db055acb33a646cf6b376-merged.mount: Deactivated successfully.
Nov 23 16:11:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Nov 23 16:11:51 np0005532761 podman[269273]: 2025-11-23 21:11:51.280267226 +0000 UTC m=+0.596970442 container remove 6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 16:11:51 np0005532761 systemd[1]: libpod-conmon-6d270704abaacc7ea4a703b9189ee842ddc15612a8fed589060af10ae3140188.scope: Deactivated successfully.
Nov 23 16:11:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:51 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.796623579 +0000 UTC m=+0.037268054 container create bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:11:51 np0005532761 systemd[1]: Started libpod-conmon-bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded.scope.
Nov 23 16:11:51 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:51.868 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:51.869 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:11:51.869 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.877470902 +0000 UTC m=+0.118115427 container init bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.781228583 +0000 UTC m=+0.021873078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.884022446 +0000 UTC m=+0.124666951 container start bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:51 np0005532761 musing_nightingale[269422]: 167 167
Nov 23 16:11:51 np0005532761 systemd[1]: libpod-bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded.scope: Deactivated successfully.
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.887739123 +0000 UTC m=+0.128383648 container attach bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.888013061 +0000 UTC m=+0.128657546 container died bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:51 np0005532761 systemd[1]: var-lib-containers-storage-overlay-540fdd1af81da775e62915fdb48d181de6cb5cd59e7b58f5518ec2b75cdcac8e-merged.mount: Deactivated successfully.
Nov 23 16:11:51 np0005532761 podman[269405]: 2025-11-23 21:11:51.924637817 +0000 UTC m=+0.165282302 container remove bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:11:51 np0005532761 systemd[1]: libpod-conmon-bedbcfdc2657f5bb91408410dafd5824adf4264b95d6abcb9696a0633a05cded.scope: Deactivated successfully.
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.110230003 +0000 UTC m=+0.048138870 container create e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:11:52 np0005532761 systemd[1]: Started libpod-conmon-e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f.scope.
Nov 23 16:11:52 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:11:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19d22a6d11baf2bf5a29ac0bbd018ae26ab4841c1df60efbf559d8e4027d3db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19d22a6d11baf2bf5a29ac0bbd018ae26ab4841c1df60efbf559d8e4027d3db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19d22a6d11baf2bf5a29ac0bbd018ae26ab4841c1df60efbf559d8e4027d3db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19d22a6d11baf2bf5a29ac0bbd018ae26ab4841c1df60efbf559d8e4027d3db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.174972512 +0000 UTC m=+0.112881399 container init e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.181250407 +0000 UTC m=+0.119159274 container start e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.092625629 +0000 UTC m=+0.030534596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.188008306 +0000 UTC m=+0.125917193 container attach e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 16:11:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:52 np0005532761 lvm[269539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:11:52 np0005532761 lvm[269539]: VG ceph_vg0 finished
Nov 23 16:11:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:52 np0005532761 charming_borg[269464]: {}
Nov 23 16:11:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:52 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:52 np0005532761 systemd[1]: libpod-e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f.scope: Deactivated successfully.
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.872209948 +0000 UTC m=+0.810118815 container died e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:52 np0005532761 systemd[1]: libpod-e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f.scope: Consumed 1.048s CPU time.
Nov 23 16:11:52 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c19d22a6d11baf2bf5a29ac0bbd018ae26ab4841c1df60efbf559d8e4027d3db-merged.mount: Deactivated successfully.
Nov 23 16:11:52 np0005532761 podman[269448]: 2025-11-23 21:11:52.911756762 +0000 UTC m=+0.849665629 container remove e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:11:52 np0005532761 systemd[1]: libpod-conmon-e85768b69cc39f18f048302d3a718bc42bf2cda7d0100123940466b4b887f41f.scope: Deactivated successfully.
Nov 23 16:11:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:11:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:11:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Nov 23 16:11:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:53 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:53 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:54.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:11:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:54 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:11:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:55 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:56 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:11:57.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:11:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:11:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:57] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 23 16:11:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:11:57] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 23 16:11:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:57 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:11:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:58 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:11:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:11:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:11:59 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:00 np0005532761 nova_compute[257263]: 2025-11-23 21:12:00.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:00 np0005532761 nova_compute[257263]: 2025-11-23 21:12:00.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:12:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:00.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:00 np0005532761 podman[269585]: 2025-11-23 21:12:00.54088465 +0000 UTC m=+0.055988308 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 16:12:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:00 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:01 np0005532761 nova_compute[257263]: 2025-11-23 21:12:01.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 23 16:12:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:01 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:02 np0005532761 nova_compute[257263]: 2025-11-23 21:12:02.053 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:02.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:02 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:12:03
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['vms', '.nfs', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
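The balancer block above shows one idle optimization pass: mode upmap, at most 5% misplaced allowed, and "prepared 0/10 upmap changes" meaning none of the ten candidate pg-upmap adjustments improved the distribution. A hedged sketch of checking the same state from the CLI (assumes a reachable cluster and keyring; the field names are the standard `ceph balancer status` JSON keys):

    import json
    import subprocess

    # Query the mgr balancer module; "mode" and "optimize_result" mirror the
    # log lines above when no further optimization is possible.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status["mode"], status["active"], status.get("optimize_result"))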
Nov 23 16:12:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:12:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011087039908510778 of space, bias 1.0, pg target 0.33261119725532334 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
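The pg_autoscaler lines above all apply the same arithmetic: a pool's share of raw capacity, times its bias, times the cluster-wide PG budget, rounded to a power of two, with no change applied unless the result differs from the current pg_num by more than the change threshold (3x by default). A worked check against the 'vms' line; the 300-PG budget is an assumption that matches 3 OSDs at the default mon_target_pg_per_osd of 100:

    # Reproduce "Pool 'vms' ... pg target 0.33261... quantized to 32 (current 32)".
    capacity_ratio = 0.0011087039908510778  # 'vms' share of raw space, from the log
    bias = 1.0
    pg_budget = 300                         # assumed: 3 OSDs * 100 target PGs/OSD

    raw_target = capacity_ratio * bias * pg_budget
    print(raw_target)  # 0.33261119725532334, exactly the logged pg target

    # Rounding to a power of two and applying the 3x change threshold leaves
    # pg_num at 32, hence "quantized to 32 (current 32)" and no action.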
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:12:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:12:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:03 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:04 np0005532761 nova_compute[257263]: 2025-11-23 21:12:04.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:04.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:04 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:05 np0005532761 nova_compute[257263]: 2025-11-23 21:12:05.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:05 np0005532761 nova_compute[257263]: 2025-11-23 21:12:05.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:05 np0005532761 nova_compute[257263]: 2025-11-23 21:12:05.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:05 np0005532761 nova_compute[257263]: 2025-11-23 21:12:05.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 16:12:05 np0005532761 nova_compute[257263]: 2025-11-23 21:12:05.056 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 16:12:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 16:12:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:05 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:12:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:12:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:06 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:07.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
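The dispatcher error above recurs roughly every ten seconds: both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) time out before Alertmanager's retry budget is exhausted. A minimal reachability probe for those endpoints; this is only a connectivity check, not a real alert POST, and the 2-second timeout is an assumption chosen to reproduce the "context deadline exceeded" symptom:

    import urllib.request

    # Probe the two receivers named in the Alertmanager error. A timeout or
    # connection error reproduces the symptom; an HTTP error (e.g. 405 for a
    # bare GET) would at least prove the endpoint is reachable.
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            urllib.request.urlopen(url, timeout=2)
            print(url, "-> reachable")
        except Exception as exc:
            print(url, "->", exc)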
Nov 23 16:12:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 23 16:12:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:12:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:12:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:07 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.056 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.056 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.056 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.066 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.067 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:08 np0005532761 nova_compute[257263]: 2025-11-23 21:12:08.067 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:12:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:08.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:08 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Nov 23 16:12:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:09 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:10 np0005532761 nova_compute[257263]: 2025-11-23 21:12:10.040 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:10.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:10.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:10 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Nov 23 16:12:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:11 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.055 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.056 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.057 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.057 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:12:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:12.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:12:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127285944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:12:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.491 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
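The resource audit shells out to `ceph df` (logged above, with its 0.433s runtime) to learn cluster capacity before reporting disk inventory. A sketch of the same probe, using the exact flags from the logged command; the parsed field names are the standard `ceph df --format=json` keys:

    import json
    import subprocess

    # The command nova_compute runs above, authenticating as client.openstack.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True,
                                   text=True, check=True).stdout)
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")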
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.615 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.616 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4901MB free_disk=59.9217529296875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.616 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.617 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.767 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.768 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.842 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing inventories for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 16:12:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:12 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.902 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating ProviderTree inventory for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.902 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating inventory in ProviderTree for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.916 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing aggregate associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.943 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing trait associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 16:12:12 np0005532761 nova_compute[257263]: 2025-11-23 21:12:12.957 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:12:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 67 op/s
Nov 23 16:12:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:12:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855298423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.374 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.380 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.392 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.393 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.393 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
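The inventory dictionaries logged above are what Placement turns into schedulable capacity, using the standard formula (total - reserved) * allocation_ratio per resource class. A quick check against the values reported in this cycle:

    # Capacity implied by the inventory nova reported to Placement above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1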
Nov 23 16:12:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:13 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:13 np0005532761 nova_compute[257263]: 2025-11-23 21:12:13.945 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:14 np0005532761 nova_compute[257263]: 2025-11-23 21:12:14.049 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:14 np0005532761 nova_compute[257263]: 2025-11-23 21:12:14.061 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:12:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:14.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:14 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Nov 23 16:12:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:15 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:16.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:16 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:17.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:12:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:12:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:12:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:12:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:17 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:12:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:12:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:18 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab94004c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:12:19 np0005532761 podman[269695]: 2025-11-23 21:12:19.590243005 +0000 UTC m=+0.100848971 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 16:12:19 np0005532761 podman[269696]: 2025-11-23 21:12:19.595767781 +0000 UTC m=+0.101647393 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:12:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:19 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:20.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab90004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:20 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:12:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:21 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:22.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:22 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8002060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:12:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:23 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:24.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88003950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:24 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab84004b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:12:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:25 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faba8001450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:26.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:26.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab78004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:12:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[261054]: 23/11/2025 21:12:26 : epoch 69237714 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab88001cb0 fd 48 proxy ignored for local
Nov 23 16:12:26 np0005532761 kernel: ganesha.nfsd[269742]: segfault at 50 ip 00007fac619ef32e sp 00007fac18ff8210 error 4 in libntirpc.so.5.8[7fac619d4000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 23 16:12:26 np0005532761 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:12:26 np0005532761 systemd[1]: Started Process Core Dump (PID 269774/UID 0).
Nov 23 16:12:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:27.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:12:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 16:12:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:27 np0005532761 systemd-coredump[269775]: Process 261058 (ganesha.nfsd) of user 0 dumped core.
Nov 23 16:12:27 np0005532761 systemd-coredump[269775]: Stack trace of thread 86:
Nov 23 16:12:27 np0005532761 systemd-coredump[269775]: #0  0x00007fac619ef32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Nov 23 16:12:27 np0005532761 systemd-coredump[269775]: ELF object binary architecture: AMD x86-64
Nov 23 16:12:28 np0005532761 systemd[1]: systemd-coredump@10-269774-0.service: Deactivated successfully.
Nov 23 16:12:28 np0005532761 systemd[1]: systemd-coredump@10-269774-0.service: Consumed 1.039s CPU time.
Nov 23 16:12:28 np0005532761 podman[269781]: 2025-11-23 21:12:28.070168924 +0000 UTC m=+0.023396928 container died eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:12:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-de2127ce28d8e97f32fa63c22bedb0778b8ce3d7eb2ffb32e6379ddeb1b1031f-merged.mount: Deactivated successfully.
Nov 23 16:12:28 np0005532761 podman[269781]: 2025-11-23 21:12:28.112751018 +0000 UTC m=+0.065979012 container remove eff2315113a6db4f6b8be8135ab57661d6e6ee842ef0ac6568139bde78c8ecee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:12:28 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:12:28 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Failed with result 'exit-code'.
Nov 23 16:12:28 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.959s CPU time.
Nov 23 16:12:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:28.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Nov 23 16:12:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 16:12:31 np0005532761 podman[269828]: 2025-11-23 21:12:31.550703866 +0000 UTC m=+0.065230952 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Nov 23 16:12:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:32.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:12:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:12:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 23 16:12:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [WARNING] 326/211233 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:12:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit[97312]: [ALERT] 326/211233 (4) : backend 'backend' has no server available!
Nov 23 16:12:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 3 op/s
Nov 23 16:12:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:36.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:37.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:12:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 16:12:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:37] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:37] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:12:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:12:38 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Scheduled restart job, restart counter is at 11.
Nov 23 16:12:38 np0005532761 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:12:38 np0005532761 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.2.0.compute-0.bfglcy.service: Consumed 1.959s CPU time.
Nov 23 16:12:38 np0005532761 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:12:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:38 np0005532761 podman[269908]: 2025-11-23 21:12:38.72217802 +0000 UTC m=+0.044012431 container create 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:12:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f12177cd9d078fce1142172f98e907cfc03d2545c116c40eb1ee6591221068/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f12177cd9d078fce1142172f98e907cfc03d2545c116c40eb1ee6591221068/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f12177cd9d078fce1142172f98e907cfc03d2545c116c40eb1ee6591221068/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:38 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f12177cd9d078fce1142172f98e907cfc03d2545c116c40eb1ee6591221068/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bfglcy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:38 np0005532761 podman[269908]: 2025-11-23 21:12:38.775119377 +0000 UTC m=+0.096953808 container init 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:12:38 np0005532761 podman[269908]: 2025-11-23 21:12:38.779653327 +0000 UTC m=+0.101487738 container start 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:12:38 np0005532761 bash[269908]: 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c
Nov 23 16:12:38 np0005532761 podman[269908]: 2025-11-23 21:12:38.705323706 +0000 UTC m=+0.027158147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:12:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:12:38 np0005532761 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bfglcy for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:12:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:12:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 12 KiB/s wr, 3 op/s
Nov 23 16:12:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:40.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 16:12:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:42.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 16:12:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:44.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:12:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:12:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 16 KiB/s wr, 4 op/s
Nov 23 16:12:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:46.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:47.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:12:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.8 KiB/s wr, 2 op/s
Nov 23 16:12:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:47] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:47] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:12:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:12:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:48.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 7.2 KiB/s wr, 50 op/s
Nov 23 16:12:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:12:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:12:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:12:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:50.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:50 np0005532761 podman[270004]: 2025-11-23 21:12:50.569666561 +0000 UTC m=+0.080613748 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 16:12:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:50 np0005532761 podman[270003]: 2025-11-23 21:12:50.594218778 +0000 UTC m=+0.105950185 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 23 16:12:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.1 KiB/s wr, 175 op/s
Nov 23 16:12:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:12:51.869 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:12:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:12:51.870 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:12:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:12:51.870 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:12:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:52.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 6.7 KiB/s wr, 175 op/s
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:12:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:54.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:12:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.757366451 +0000 UTC m=+0.038779574 container create ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:12:54 np0005532761 systemd[1]: Started libpod-conmon-ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f.scope.
Nov 23 16:12:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.826728611 +0000 UTC m=+0.108141754 container init ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.834448465 +0000 UTC m=+0.115861588 container start ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.837799463 +0000 UTC m=+0.119212646 container attach ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.742711774 +0000 UTC m=+0.024124927 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:54 np0005532761 xenodochial_bardeen[270241]: 167 167
Nov 23 16:12:54 np0005532761 systemd[1]: libpod-ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f.scope: Deactivated successfully.
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.840171936 +0000 UTC m=+0.121585089 container died ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:12:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a4f9daa38dda210f95dfaef854fa7dbeb969d3ae76a1be81921078656aa0ba25-merged.mount: Deactivated successfully.
Nov 23 16:12:54 np0005532761 podman[270225]: 2025-11-23 21:12:54.887896305 +0000 UTC m=+0.169309448 container remove ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bardeen, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 16:12:54 np0005532761 systemd[1]: libpod-conmon-ca7b819141616c8b58828e7fcecef07782a82a68783ab79b053d2c4ca158921f.scope: Deactivated successfully.
Nov 23 16:12:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:12:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:12:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:12:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.032019888 +0000 UTC m=+0.040282374 container create 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:12:55 np0005532761 systemd[1]: Started libpod-conmon-8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a.scope.
Nov 23 16:12:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.014604128 +0000 UTC m=+0.022866654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.114147015 +0000 UTC m=+0.122409521 container init 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.121001285 +0000 UTC m=+0.129263771 container start 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.124898228 +0000 UTC m=+0.133160734 container attach 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:12:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.1 KiB/s wr, 175 op/s
Nov 23 16:12:55 np0005532761 funny_edison[270283]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:12:55 np0005532761 funny_edison[270283]: --> All data devices are unavailable
Nov 23 16:12:55 np0005532761 systemd[1]: libpod-8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a.scope: Deactivated successfully.
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.428409776 +0000 UTC m=+0.436672302 container died 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 16:12:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8e8489efb6ba78d058989639d7f3466af76c82b99b620a24e759f903b63948d4-merged.mount: Deactivated successfully.
Nov 23 16:12:55 np0005532761 podman[270266]: 2025-11-23 21:12:55.482134583 +0000 UTC m=+0.490397069 container remove 8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_edison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:12:55 np0005532761 systemd[1]: libpod-conmon-8046801e88ff0df28ded327f1b4eca34a61fe50a63ca238aa3e6c2ec8fd0f86a.scope: Deactivated successfully.
Nov 23 16:12:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.113880133 +0000 UTC m=+0.113576919 container create cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.018624129 +0000 UTC m=+0.018320905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:56 np0005532761 systemd[1]: Started libpod-conmon-cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e.scope.
Nov 23 16:12:56 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:56.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.399622392 +0000 UTC m=+0.399319178 container init cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.413384604 +0000 UTC m=+0.413081350 container start cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:12:56 np0005532761 laughing_noether[270418]: 167 167
Nov 23 16:12:56 np0005532761 systemd[1]: libpod-cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e.scope: Deactivated successfully.
Nov 23 16:12:56 np0005532761 conmon[270418]: conmon cf2a4f20f18f1233042a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e.scope/container/memory.events
Nov 23 16:12:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.574061934 +0000 UTC m=+0.573758680 container attach cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:12:56 np0005532761 podman[270401]: 2025-11-23 21:12:56.574596128 +0000 UTC m=+0.574292874 container died cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:12:56 np0005532761 systemd[1]: var-lib-containers-storage-overlay-509901891f79d6336c6938ea20a345c41b3fdde580bdb751bb6b0a39b9c1d4fa-merged.mount: Deactivated successfully.
Nov 23 16:12:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:57.174Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:12:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:12:57.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:12:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 4.7 KiB/s wr, 174 op/s
Nov 23 16:12:57 np0005532761 podman[270401]: 2025-11-23 21:12:57.354734722 +0000 UTC m=+1.354431478 container remove cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 23 16:12:57 np0005532761 systemd[1]: libpod-conmon-cf2a4f20f18f1233042af7f2ddb0534f91fb6b507519a79144676c362447125e.scope: Deactivated successfully.
Nov 23 16:12:57 np0005532761 podman[270446]: 2025-11-23 21:12:57.500885327 +0000 UTC m=+0.024698062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:57 np0005532761 podman[270446]: 2025-11-23 21:12:57.665915162 +0000 UTC m=+0.189727897 container create 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:12:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:12:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:12:57 np0005532761 systemd[1]: Started libpod-conmon-2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f.scope.
Nov 23 16:12:57 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085a8dcd10bbf9360b17fc4f6ad422bb60ee5c696e41927054af4a0b913a4a38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085a8dcd10bbf9360b17fc4f6ad422bb60ee5c696e41927054af4a0b913a4a38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085a8dcd10bbf9360b17fc4f6ad422bb60ee5c696e41927054af4a0b913a4a38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:57 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085a8dcd10bbf9360b17fc4f6ad422bb60ee5c696e41927054af4a0b913a4a38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:57 np0005532761 podman[270446]: 2025-11-23 21:12:57.928022457 +0000 UTC m=+0.451835232 container init 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:12:57 np0005532761 podman[270446]: 2025-11-23 21:12:57.93454964 +0000 UTC m=+0.458362355 container start 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:12:57 np0005532761 podman[270446]: 2025-11-23 21:12:57.938432963 +0000 UTC m=+0.462245728 container attach 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]: {
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:    "1": [
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:        {
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "devices": [
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "/dev/loop3"
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            ],
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "lv_name": "ceph_lv0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "lv_size": "21470642176",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "name": "ceph_lv0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "tags": {
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.cluster_name": "ceph",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.crush_device_class": "",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.encrypted": "0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.osd_id": "1",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.type": "block",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.vdo": "0",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:                "ceph.with_tpm": "0"
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            },
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "type": "block",
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:            "vg_name": "ceph_vg0"
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:        }
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]:    ]
Nov 23 16:12:58 np0005532761 mystifying_clarke[270462]: }
Nov 23 16:12:58 np0005532761 systemd[1]: libpod-2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f.scope: Deactivated successfully.
Nov 23 16:12:58 np0005532761 podman[270446]: 2025-11-23 21:12:58.230685523 +0000 UTC m=+0.754498228 container died 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 16:12:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-085a8dcd10bbf9360b17fc4f6ad422bb60ee5c696e41927054af4a0b913a4a38-merged.mount: Deactivated successfully.
Nov 23 16:12:58 np0005532761 podman[270446]: 2025-11-23 21:12:58.280263352 +0000 UTC m=+0.804076067 container remove 2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_clarke, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:12:58 np0005532761 systemd[1]: libpod-conmon-2b9cf5a6f5261e11f2b227215fbeb2ef3b5032544145c5c5bfde8efc79624f1f.scope: Deactivated successfully.
Nov 23 16:12:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:58.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:12:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:58.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.871599523 +0000 UTC m=+0.071430105 container create 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.825387724 +0000 UTC m=+0.025218326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:58 np0005532761 systemd[1]: Started libpod-conmon-9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f.scope.
Nov 23 16:12:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.978137724 +0000 UTC m=+0.177968336 container init 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.988720494 +0000 UTC m=+0.188551076 container start 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.991441906 +0000 UTC m=+0.191272478 container attach 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 16:12:58 np0005532761 quizzical_pike[270592]: 167 167
Nov 23 16:12:58 np0005532761 systemd[1]: libpod-9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f.scope: Deactivated successfully.
Nov 23 16:12:58 np0005532761 podman[270575]: 2025-11-23 21:12:58.994561368 +0000 UTC m=+0.194391990 container died 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:12:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9aadedbb21fcbfdbbeb65a68154ffbb5393f117cba1d95efed97201d400ade29-merged.mount: Deactivated successfully.
Nov 23 16:12:59 np0005532761 podman[270575]: 2025-11-23 21:12:59.03783615 +0000 UTC m=+0.237666732 container remove 9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 16:12:59 np0005532761 systemd[1]: libpod-conmon-9c1997886de82ac4a4d8ddf3c099b5ccdc7c838c42aa7cceb0aa852ac2b6bc2f.scope: Deactivated successfully.
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.198141779 +0000 UTC m=+0.041897037 container create 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 16:12:59 np0005532761 systemd[1]: Started libpod-conmon-02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e.scope.
Nov 23 16:12:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:12:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece2122f2a9ccc15b3a9b524ea8d7ee37db390867f81d1e9e9c9c60559c68d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece2122f2a9ccc15b3a9b524ea8d7ee37db390867f81d1e9e9c9c60559c68d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece2122f2a9ccc15b3a9b524ea8d7ee37db390867f81d1e9e9c9c60559c68d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:59 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece2122f2a9ccc15b3a9b524ea8d7ee37db390867f81d1e9e9c9c60559c68d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.181431898 +0000 UTC m=+0.025187216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.282623048 +0000 UTC m=+0.126378306 container init 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.288713979 +0000 UTC m=+0.132469247 container start 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:12:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 14 KiB/s wr, 176 op/s
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.291951154 +0000 UTC m=+0.135706422 container attach 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:12:59 np0005532761 lvm[270708]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:12:59 np0005532761 lvm[270708]: VG ceph_vg0 finished
Nov 23 16:12:59 np0005532761 charming_jackson[270634]: {}
Nov 23 16:12:59 np0005532761 systemd[1]: libpod-02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e.scope: Deactivated successfully.
Nov 23 16:12:59 np0005532761 podman[270617]: 2025-11-23 21:12:59.990480484 +0000 UTC m=+0.834235762 container died 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 16:12:59 np0005532761 systemd[1]: libpod-02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e.scope: Consumed 1.117s CPU time.
Nov 23 16:13:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:12:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2ece2122f2a9ccc15b3a9b524ea8d7ee37db390867f81d1e9e9c9c60559c68d3-merged.mount: Deactivated successfully.
Nov 23 16:13:00 np0005532761 podman[270617]: 2025-11-23 21:13:00.032991626 +0000 UTC m=+0.876746894 container remove 02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 16:13:00 np0005532761 systemd[1]: libpod-conmon-02a1af18f277f978b1924a26e7e70d24d9b84eb837f87667ccf88dd92423d43e.scope: Deactivated successfully.
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 12 KiB/s wr, 128 op/s
Nov 23 16:13:02 np0005532761 nova_compute[257263]: 2025-11-23 21:13:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:02.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:02 np0005532761 podman[270749]: 2025-11-23 21:13:02.534519358 +0000 UTC m=+0.053732748 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 16:13:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:13:03
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.nfs']
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:13:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:13:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 11 KiB/s wr, 2 op/s
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015224887266393175 of space, bias 1.0, pg target 0.4567466179917953 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:13:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:13:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:04.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 13 KiB/s wr, 3 op/s
Nov 23 16:13:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:06 np0005532761 nova_compute[257263]: 2025-11-23 21:13:06.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:06 np0005532761 nova_compute[257263]: 2025-11-23 21:13:06.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:06 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:06.488 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:13:06 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:06.489 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:13:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:06.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:07 np0005532761 nova_compute[257263]: 2025-11-23 21:13:07.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:07.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:13:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 16:13:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:07] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Nov 23 16:13:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:07] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Nov 23 16:13:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:13:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656138691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:13:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:13:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656138691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.044 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.044 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:08 np0005532761 nova_compute[257263]: 2025-11-23 21:13:08.044 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
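All the "Running periodic task ..." entries in this capture come from the same oslo.service dispatcher: decorated methods on nova's ComputeManager are collected and invoked on their configured spacing, and tasks such as _reclaim_queued_deletes bail out immediately when their interval is non-positive, exactly as logged above. A compressed sketch of that pattern, using the real oslo_service decorator and base class; the manager class, spacings, and config attribute are illustrative.

```python
from oslo_service import periodic_task


class Manager(periodic_task.PeriodicTasks):
    def __init__(self, conf):
        super().__init__(conf)
        self.conf = conf

    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        pass  # each tick produces a "Running periodic task ..." DEBUG line

    @periodic_task.periodic_task(spacing=300)
    def _reclaim_queued_deletes(self, context):
        # Mirrors the "<= 0, skipping..." guard logged above.
        if self.conf.reclaim_instance_interval <= 0:
            return
```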
Nov 23 16:13:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:08.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
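The radosgw trios above are external liveness probes: 192.168.122.100 and .102 each issue an anonymous HEAD / against the beast frontend every two seconds and get a 200 with a zero-byte body, usually in under a millisecond. A sketch of an equivalent probe; the endpoint URL is hypothetical, only the HTTP shape comes from the log.

```python
# Probe the RGW frontend the way the load balancer above does: bare HEAD /
# on a 2 s cadence, expecting 200 with an empty body.
import time
import requests

RGW = "http://192.168.122.106:8080/"   # hypothetical beast endpoint

for _ in range(3):
    r = requests.head(RGW, timeout=1)
    print(r.status_code)               # the log shows 200, 0 bytes
    time.sleep(2)
```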
Nov 23 16:13:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 183 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 12 op/s
Nov 23 16:13:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:10 np0005532761 nova_compute[257263]: 2025-11-23 21:13:10.039 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:10.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 29 op/s
Nov 23 16:13:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.052 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.053 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.053 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.054 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.054 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:13:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:13:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:13:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935920564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.526 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
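Twice during this resource audit nova shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` via oslo.concurrency (each run also surfaces as a mon_command dispatch in the mon's audit channel above). The equivalent call, reduced to the standard library; top-level keys follow `ceph df --format=json` output, though exact field names can vary by Ceph release.

```python
# Run the same command nova logs above and read the cluster totals back.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout

stats = json.loads(out)
# "stats" holds cluster-wide totals, "pools" per-pool usage.
print(stats["stats"]["total_bytes"], "bytes total")
```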
Nov 23 16:13:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.665 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.666 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4880MB free_disk=59.942466735839844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.666 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.666 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.739 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.739 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:13:14 np0005532761 nova_compute[257263]: 2025-11-23 21:13:14.760 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:13:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611156026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:15 np0005532761 nova_compute[257263]: 2025-11-23 21:13:15.258 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:13:15 np0005532761 nova_compute[257263]: 2025-11-23 21:13:15.264 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:13:15 np0005532761 nova_compute[257263]: 2025-11-23 21:13:15.289 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:13:15 np0005532761 nova_compute[257263]: 2025-11-23 21:13:15.292 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:13:15 np0005532761 nova_compute[257263]: 2025-11-23 21:13:15.292 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
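The inventory payload above fixes what this node advertises to placement: capacity per resource class is (total - reserved) * allocation_ratio, so the host offers 32 schedulable VCPU, 7167 MB of RAM, and 53.1 GB of disk despite having 8 cores, 7679 MB, and 59 GB physically. The arithmetic, spelled out with the values from the log:

```python
# Effective placement capacity implied by the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1
```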
Nov 23 16:13:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 23 16:13:15 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:15.491 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:13:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:16.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:17.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:13:17 np0005532761 nova_compute[257263]: 2025-11-23 21:13:17.293 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:13:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 16:13:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:17] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Nov 23 16:13:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:17] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Nov 23 16:13:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:13:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:13:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 57 op/s
Nov 23 16:13:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:20.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 48 op/s
Nov 23 16:13:21 np0005532761 podman[270859]: 2025-11-23 21:13:21.407093449 +0000 UTC m=+0.078843270 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:13:21 np0005532761 podman[270860]: 2025-11-23 21:13:21.408864777 +0000 UTC m=+0.077391334 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
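These podman events record healthcheck runs: each container's config_data carries a 'healthcheck' entry ('test': '/openstack/healthcheck'), podman executes it on a timer, and a passing run is journaled as health_status=healthy with health_failing_streak reset to 0. The recorded state can be read back with `podman inspect`; a small sketch follows, with container names taken from this log (the Go-template field name is per recent podman releases, which older versions spelled .State.Healthcheck).

```python
# Query the health state podman records for the containers seen above.
import subprocess

for name in ("ovn_controller", "ovn_metadata_agent", "multipathd"):
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True).stdout.strip()
    print(name, status or "no healthcheck")
```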
Nov 23 16:13:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:13:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:24.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:24.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 16:13:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:26.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:27.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:13:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:27.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:13:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
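The alertmanager dispatcher keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 (dial timeouts and context deadlines on port 8443), so every flush ends in "Notify for alerts failed". The real endpoint is ceph-mgr's dashboard module; as a purely hypothetical stand-in for debugging, a minimal server that would accept these POSTs looks like:

```python
# Hypothetical stand-in receiver for the Alertmanager webhook POSTs that
# are timing out above; it accepts the JSON payload and returns 200.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        print("received", len(payload.get("alerts", [])), "alert(s)")
        self.send_response(200)
        self.end_headers()


HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```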
Nov 23 16:13:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:13:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:13:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:13:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:13:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:13:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:32.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:13:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:13:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:13:33 np0005532761 podman[270942]: 2025-11-23 21:13:33.544687382 +0000 UTC m=+0.064289568 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:13:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:34.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 43 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 92 KiB/s wr, 2 op/s
Nov 23 16:13:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:36.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:36.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:37.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:13:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:37.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:13:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:37.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:13:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 43 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 92 KiB/s wr, 1 op/s
Nov 23 16:13:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:13:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:13:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:38.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 51 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 372 KiB/s wr, 5 op/s
Nov 23 16:13:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:40.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.610403) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420610439, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1582, "num_deletes": 257, "total_data_size": 3023218, "memory_usage": 3053936, "flush_reason": "Manual Compaction"}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420629776, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2940573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26868, "largest_seqno": 28449, "table_properties": {"data_size": 2933288, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15040, "raw_average_key_size": 19, "raw_value_size": 2918711, "raw_average_value_size": 3790, "num_data_blocks": 186, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932273, "oldest_key_time": 1763932273, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 19445 microseconds, and 6152 cpu microseconds.
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.629844) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2940573 bytes OK
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.629864) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631262) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631276) EVENT_LOG_v1 {"time_micros": 1763932420631272, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3016447, prev total WAL file size 3016447, number of live WAL files 2.
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
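JOB 31 above is a complete memtable flush triggered by a manual compaction: 1582 entries (257 of them deletes, ~3.0 MB) are written out as L0 table #61 in about 19 ms, the table is committed, and the now-redundant WAL segment 000057.log is removed. The "EVENT_LOG_v1" lines are plain JSON after the marker, so flush/compaction timing can be pulled straight out of a journal dump; a minimal sketch, assuming the journal text is piped in on stdin:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        event = json.loads(line[idx + len(MARKER):])
        # e.g. job 31: flush_started -> table_file_creation -> flush_finished
        print(event.get("time_micros"), event.get("job"), event["event"])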
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2871KB)], [59(14MB)]
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420632244, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17868870, "oldest_snapshot_seqno": -1}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6082 keys, 17720273 bytes, temperature: kUnknown
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420748060, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17720273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17675884, "index_size": 28087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154722, "raw_average_key_size": 25, "raw_value_size": 17562761, "raw_average_value_size": 2887, "num_data_blocks": 1153, "num_entries": 6082, "num_filter_entries": 6082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.748302) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17720273 bytes
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749577) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.2 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 14.2 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6614, records dropped: 532 output_compression: NoCompression
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749593) EVENT_LOG_v1 {"time_micros": 1763932420749586, "job": 32, "event": "compaction_finished", "compaction_time_micros": 115896, "compaction_time_cpu_micros": 36509, "output_level": 6, "num_output_files": 1, "total_output_size": 17720273, "num_input_records": 6614, "num_output_records": 6082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
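JOB 32 rewrites the fresh 2.8 MB L0 file plus the existing 14.2 MB L6 file into a single 16.9 MB L6 file, dropping 532 of 6614 records (tombstones and overwritten keys). The two amplification figures in the summary follow directly from those sizes; checking the arithmetic:

    # Values as printed in the JOB 32 summary above (MB).
    l0_in, l6_in, out = 2.8, 14.2, 16.9

    write_amp = out / l0_in                 # bytes written per byte of new data
    rw_amp = (l0_in + l6_in + out) / l0_in  # bytes read+written per byte of new data

    print(round(write_amp, 1))  # 6.0  -> matches write-amplify(6.0)
    print(round(rw_amp, 1))     # 12.1 -> matches read-write-amplify(12.1)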
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420750344, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420752879, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.752986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.752991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.752992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.752999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:13:40.753000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 23 16:13:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:42.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:42.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
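The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.102 and 192.168.122.100 recur every two seconds for the rest of this section and always return 200 within a millisecond; they look like load-balancer health probes against the radosgw beast frontend rather than real S3 traffic (an inference from the cadence, not something these lines state). One such probe can be reproduced by hand; a sketch assuming the frontend listens on port 8080, a detail the beast lines above do not reveal:

    import http.client

    # Hypothetical port: not visible in the log.
    conn = http.client.HTTPConnection("np0005532761", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above all log http_status=200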
Nov 23 16:13:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 23 16:13:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
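This ganesha.nfsd block repeats every five seconds throughout the section: the server (re)enters a 90-second grace period, reloads client reclaim data from the RADOS backend, finds zero clients to wait for (clid count(0)), and rados_cluster_grace_enforcing returns -45. If that is a negated Linux errno (an assumption; the log does not say), it can be decoded in place; a sketch:

    import errno
    import os

    code = 45  # from "ret=-45" above, read as a negated Linux errno (assumption)
    print(errno.errorcode.get(code, "?"), os.strerror(code))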
Nov 23 16:13:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:13:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:13:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 16:13:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:46.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:46.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:47.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
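Alertmanager on this node cannot deliver to the Ceph dashboard webhook receivers on compute-1 and compute-2: both POSTs time out ("context deadline exceeded") after two attempts, and the same failure recurs ten seconds later below. Reachability of one receiver can be checked directly from this host; a sketch that assumes plain HTTP on 8443, as the URL in the error suggests:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST",
                                 headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:
        print("unreachable:", exc)  # the equivalent of the deadline error above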
Nov 23 16:13:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 126 op/s
Nov 23 16:13:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:13:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:13:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:13:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
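These handle_command/audit pairs are a mgr module polling the monitor (the same "osd blocklist ls" appears again at 16:14:03 below). The identical mon command can be issued from any client via the librados Python binding; a sketch assuming python3-rados and an admin keyring at the usual /etc/ceph paths:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, out.decode() or errs)
    cluster.shutdown()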
Nov 23 16:13:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:48.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 127 op/s
Nov 23 16:13:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:50.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:50.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 124 op/s
Nov 23 16:13:51 np0005532761 podman[271007]: 2025-11-23 21:13:51.517672519 +0000 UTC m=+0.042560035 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 23 16:13:51 np0005532761 podman[271006]: 2025-11-23 21:13:51.542462462 +0000 UTC m=+0.069140135 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
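The two health_status=healthy events above are podman's scheduled healthchecks for the ovn_metadata_agent and ovn_controller containers; the long config_data blob is just each container's creation config echoed into the event. The same check can be run on demand; a sketch:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")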
Nov 23 16:13:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:51.871 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:51.871 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:13:51.871 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
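The acquiring/acquired/released triple is oslo.concurrency's standard named-lock trace around ProcessMonitor._check_child_processes; the lock is held for under a millisecond here, so the monitor has nothing to restart. A pattern like the one below produces that trace (a sketch, assuming oslo.concurrency is installed, as it is inside the agent container):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; entry and exit emit the
        # acquiring/acquired/released DEBUG lines seen above.
        pass

    _check_child_processes()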
Nov 23 16:13:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:52.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:52.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 23 16:13:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:54.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 23 16:13:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:56.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:13:57.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:13:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:13:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:13:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:13:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:13:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:13:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:13:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:13:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:13:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:13:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:13:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 16:14:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:00.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:00.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:14:01 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:14:02 np0005532761 nova_compute[257263]: 2025-11-23 21:14:02.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:14:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:02.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:14:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.647633528 +0000 UTC m=+0.029335145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.804257941 +0000 UTC m=+0.185959478 container create ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:14:02 np0005532761 systemd[1]: Started libpod-conmon-ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df.scope.
Nov 23 16:14:02 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.92931289 +0000 UTC m=+0.311014447 container init ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.936498309 +0000 UTC m=+0.318199846 container start ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:14:02 np0005532761 vibrant_shannon[271246]: 167 167
Nov 23 16:14:02 np0005532761 systemd[1]: libpod-ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df.scope: Deactivated successfully.
Nov 23 16:14:02 np0005532761 conmon[271246]: conmon ca2f3d85707cb6726674 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df.scope/container/memory.events
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.983197411 +0000 UTC m=+0.364898978 container attach ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:14:02 np0005532761 podman[271229]: 2025-11-23 21:14:02.984707332 +0000 UTC m=+0.366408929 container died ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:03 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7ac6d9700417cd11789d4086e07336045159c1057b995abea5e80fa60bb25940-merged.mount: Deactivated successfully.
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:14:03
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', 'images']
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
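This balancer run is a no-op: in upmap mode with a 5% misplaced ceiling it examined all twelve pools and prepared 0 of an allowed 10 upmap changes, i.e. PG placement is already even. The module's current view can be queried the same way; a sketch assuming the ceph CLI is available:

    import subprocess

    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True).stdout)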
Nov 23 16:14:03 np0005532761 podman[271229]: 2025-11-23 21:14:03.177233981 +0000 UTC m=+0.558935528 container remove ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 16:14:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:14:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:14:03 np0005532761 systemd[1]: libpod-conmon-ca2f3d85707cb67266743064bc0a841828657af2396100bbedd51a4bcf57a1df.scope: Deactivated successfully.
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.367670665 +0000 UTC m=+0.054724735 container create a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:03 np0005532761 systemd[1]: Started libpod-conmon-a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78.scope.
Nov 23 16:14:03 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:03 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.440103826 +0000 UTC m=+0.127157896 container init a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.352825513 +0000 UTC m=+0.039879603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.450396568 +0000 UTC m=+0.137450648 container start a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.453481599 +0000 UTC m=+0.140535689 container attach a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
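Every pg_autoscaler line above follows one formula: pg target = (fraction of the 64411926528-byte capacity used) x bias x 300, after which the tiny targets are quantized back to each pool's current pg_num, so nothing is resized. The factor 300 is inferred from the data (it fits every line here); plausibly mon_target_pg_per_osd (default 100) times three OSDs backing this ~60 GiB cluster, but that decomposition is an assumption. Checking two rows:

    # usage ratio, bias, and logged pg target copied from the lines above
    rows = [
        ("vms",                0.00034841348814872695, 1.0, 0.10452404644461809),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    ]
    FACTOR = 300  # inferred: fits every autoscaler line in this section

    for pool, usage, bias, logged_target in rows:
        assert abs(usage * bias * FACTOR - logged_target) < 1e-12
        print(pool, usage * bias * FACTOR)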
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:14:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:14:03 np0005532761 romantic_meninsky[271288]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:14:03 np0005532761 romantic_meninsky[271288]: --> All data devices are unavailable
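The short-lived vibrant_shannon/romantic_meninsky containers are cephadm running disposable ceph-volume probes from the ceph image; "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means the only candidate device is an LVM device already consumed, so no new OSD is created. The same inventory can be pulled on demand; a sketch assuming the cephadm wrapper is installed on the host:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))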
Nov 23 16:14:03 np0005532761 systemd[1]: libpod-a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78.scope: Deactivated successfully.
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.762944424 +0000 UTC m=+0.449998514 container died a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 23 16:14:03 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1a4fae2c1c13c86f5f6ca42b73e035ed7aec19b9276e1070d6a49933b67f03c0-merged.mount: Deactivated successfully.
Nov 23 16:14:03 np0005532761 podman[271272]: 2025-11-23 21:14:03.831136583 +0000 UTC m=+0.518190683 container remove a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 16:14:03 np0005532761 systemd[1]: libpod-conmon-a134f498bfbd91104f2ec56db38894996b2716b1b00975679a18ee5e6ca60b78.scope: Deactivated successfully.
Nov 23 16:14:03 np0005532761 podman[271303]: 2025-11-23 21:14:03.889714079 +0000 UTC m=+0.096720993 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:14:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:14:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.463199161 +0000 UTC m=+0.052285092 container create 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:14:04 np0005532761 systemd[1]: Started libpod-conmon-1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99.scope.
Nov 23 16:14:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.439194566 +0000 UTC m=+0.028280537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.543326205 +0000 UTC m=+0.132412156 container init 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.553885163 +0000 UTC m=+0.142971104 container start 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.557443677 +0000 UTC m=+0.146529628 container attach 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:14:04 np0005532761 jovial_raman[271446]: 167 167
Nov 23 16:14:04 np0005532761 systemd[1]: libpod-1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99.scope: Deactivated successfully.
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.56097587 +0000 UTC m=+0.150061831 container died 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:14:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f241c7d161411856e7cf02fdeb4ea3fcd4243a7ad78f5c1990a9f92f1e8947db-merged.mount: Deactivated successfully.
Nov 23 16:14:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:04 np0005532761 podman[271429]: 2025-11-23 21:14:04.609592443 +0000 UTC m=+0.198678374 container remove 1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:14:04 np0005532761 systemd[1]: libpod-conmon-1e8ee21fa4c7657196e6e05135df66675f6f8fea3c79519400fb2a9231717f99.scope: Deactivated successfully.
Nov 23 16:14:04 np0005532761 podman[271470]: 2025-11-23 21:14:04.781339044 +0000 UTC m=+0.049140067 container create 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:14:04 np0005532761 systemd[1]: Started libpod-conmon-21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab.scope.
Nov 23 16:14:04 np0005532761 podman[271470]: 2025-11-23 21:14:04.759108678 +0000 UTC m=+0.026909721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f643a23d22e4f557137f4b8ffb52b75e69f886e5d4ff3ab6e1a301a453f9d7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f643a23d22e4f557137f4b8ffb52b75e69f886e5d4ff3ab6e1a301a453f9d7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f643a23d22e4f557137f4b8ffb52b75e69f886e5d4ff3ab6e1a301a453f9d7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f643a23d22e4f557137f4b8ffb52b75e69f886e5d4ff3ab6e1a301a453f9d7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:04 np0005532761 podman[271470]: 2025-11-23 21:14:04.889656822 +0000 UTC m=+0.157457865 container init 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 16:14:04 np0005532761 podman[271470]: 2025-11-23 21:14:04.903531668 +0000 UTC m=+0.171332691 container start 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:14:04 np0005532761 podman[271470]: 2025-11-23 21:14:04.907882513 +0000 UTC m=+0.175683566 container attach 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:14:05 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 16:14:05 np0005532761 admiring_jang[271486]: {
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:    "1": [
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:        {
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "devices": [
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "/dev/loop3"
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            ],
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "lv_name": "ceph_lv0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "lv_size": "21470642176",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "name": "ceph_lv0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "tags": {
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.cluster_name": "ceph",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.crush_device_class": "",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.encrypted": "0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.osd_id": "1",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.type": "block",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.vdo": "0",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:                "ceph.with_tpm": "0"
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            },
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "type": "block",
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:            "vg_name": "ceph_vg0"
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:        }
Nov 23 16:14:05 np0005532761 admiring_jang[271486]:    ]
Nov 23 16:14:05 np0005532761 admiring_jang[271486]: }
Nov 23 16:14:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 16:14:05 np0005532761 systemd[1]: libpod-21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab.scope: Deactivated successfully.
Nov 23 16:14:05 np0005532761 podman[271470]: 2025-11-23 21:14:05.314728087 +0000 UTC m=+0.582529110 container died 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:05 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5f643a23d22e4f557137f4b8ffb52b75e69f886e5d4ff3ab6e1a301a453f9d7d-merged.mount: Deactivated successfully.
Nov 23 16:14:05 np0005532761 podman[271470]: 2025-11-23 21:14:05.361621004 +0000 UTC m=+0.629422047 container remove 21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:14:05 np0005532761 systemd[1]: libpod-conmon-21854a7e6a7acd32d0af0f64500013ad96ab1220deee04c982514c3efafdd9ab.scope: Deactivated successfully.
Nov 23 16:14:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:05 np0005532761 podman[271599]: 2025-11-23 21:14:05.911297677 +0000 UTC m=+0.042463882 container create a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 16:14:05 np0005532761 systemd[1]: Started libpod-conmon-a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284.scope.
Nov 23 16:14:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:05 np0005532761 podman[271599]: 2025-11-23 21:14:05.892101701 +0000 UTC m=+0.023268196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:05 np0005532761 podman[271599]: 2025-11-23 21:14:05.996083694 +0000 UTC m=+0.127249949 container init a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:14:06 np0005532761 podman[271599]: 2025-11-23 21:14:06.001920168 +0000 UTC m=+0.133086383 container start a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 16:14:06 np0005532761 podman[271599]: 2025-11-23 21:14:06.005265757 +0000 UTC m=+0.136432042 container attach a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 16:14:06 np0005532761 clever_bardeen[271640]: 167 167
Nov 23 16:14:06 np0005532761 systemd[1]: libpod-a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284.scope: Deactivated successfully.
Nov 23 16:14:06 np0005532761 podman[271599]: 2025-11-23 21:14:06.007006602 +0000 UTC m=+0.138172817 container died a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:14:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-92cbb527935b48b7cf4c62bbac979f2e32bab4a7f2fb2da6ede9593aa7ad7fe3-merged.mount: Deactivated successfully.
Nov 23 16:14:06 np0005532761 podman[271599]: 2025-11-23 21:14:06.042434387 +0000 UTC m=+0.173600592 container remove a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bardeen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:14:06 np0005532761 systemd[1]: libpod-conmon-a5092bf7d1649ce543d98f8084acd04f548ba0c5ce48c1ba495ea839e6044284.scope: Deactivated successfully.
Nov 23 16:14:06 np0005532761 podman[271665]: 2025-11-23 21:14:06.209991418 +0000 UTC m=+0.038685212 container create 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:14:06 np0005532761 systemd[1]: Started libpod-conmon-31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45.scope.
Nov 23 16:14:06 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:14:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271d61d866754e2e89c70667c819b0a16410612beeede4d43993abf6332a1e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271d61d866754e2e89c70667c819b0a16410612beeede4d43993abf6332a1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271d61d866754e2e89c70667c819b0a16410612beeede4d43993abf6332a1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:06 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271d61d866754e2e89c70667c819b0a16410612beeede4d43993abf6332a1e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:06 np0005532761 podman[271665]: 2025-11-23 21:14:06.194365186 +0000 UTC m=+0.023059010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:14:06 np0005532761 podman[271665]: 2025-11-23 21:14:06.293218244 +0000 UTC m=+0.121912068 container init 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:14:06 np0005532761 podman[271665]: 2025-11-23 21:14:06.303513596 +0000 UTC m=+0.132207400 container start 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:14:06 np0005532761 podman[271665]: 2025-11-23 21:14:06.307060679 +0000 UTC m=+0.135754483 container attach 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:06 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:06.752 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:14:06 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:06.755 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:14:06 np0005532761 lvm[271756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:14:06 np0005532761 lvm[271756]: VG ceph_vg0 finished
Nov 23 16:14:06 np0005532761 mystifying_carson[271681]: {}
Nov 23 16:14:07 np0005532761 systemd[1]: libpod-31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45.scope: Deactivated successfully.
Nov 23 16:14:07 np0005532761 podman[271665]: 2025-11-23 21:14:07.009311068 +0000 UTC m=+0.838004872 container died 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:14:07 np0005532761 systemd[1]: libpod-31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45.scope: Consumed 1.109s CPU time.
Nov 23 16:14:07 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9271d61d866754e2e89c70667c819b0a16410612beeede4d43993abf6332a1e5-merged.mount: Deactivated successfully.
Nov 23 16:14:07 np0005532761 nova_compute[257263]: 2025-11-23 21:14:07.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:07 np0005532761 podman[271665]: 2025-11-23 21:14:07.056934344 +0000 UTC m=+0.885628148 container remove 31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:07 np0005532761 systemd[1]: libpod-conmon-31704a7b9640c4627ae1bf76fbd0cc58a9446c8000f58fd501a4711e2e589c45.scope: Deactivated successfully.
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:14:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:07.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:14:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 23 16:14:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787009518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:14:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787009518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:14:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:08 np0005532761 nova_compute[257263]: 2025-11-23 21:14:08.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:08 np0005532761 nova_compute[257263]: 2025-11-23 21:14:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:14:08 np0005532761 nova_compute[257263]: 2025-11-23 21:14:08.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:14:08 np0005532761 nova_compute[257263]: 2025-11-23 21:14:08.047 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:14:08 np0005532761 nova_compute[257263]: 2025-11-23 21:14:08.048 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:08.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:09 np0005532761 nova_compute[257263]: 2025-11-23 21:14:09.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:09 np0005532761 nova_compute[257263]: 2025-11-23 21:14:09.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:09 np0005532761 nova_compute[257263]: 2025-11-23 21:14:09.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:14:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 23 16:14:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:10.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:10.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:10 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:10.758 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:11 np0005532761 nova_compute[257263]: 2025-11-23 21:14:11.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 91 op/s
Nov 23 16:14:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:12.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:12.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 23 16:14:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:14.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 23 16:14:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.060 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.060 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:14:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470766541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.518 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:16.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.653 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.654 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4886MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.654 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.655 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.733 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.733 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:14:16 np0005532761 nova_compute[257263]: 2025-11-23 21:14:16.760 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
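The resource tracker above sizes its Ceph-backed storage by shelling out to `ceph df --format=json` (0.458 s and 0.472 s round trips in this window). A minimal parsing sketch, assuming the JSON shape recent Ceph releases emit (`stats`/`total_avail_bytes`/`pools`); the pool name and byte counts below are illustrative placeholders, not values captured from this cluster:

```python
import json

# Hypothetical payload in the shape `ceph df --format=json` returns on recent
# Ceph releases; exact keys can differ across versions.
sample = '''
{"stats": {"total_bytes": 64424509440,
           "total_avail_bytes": 64147685376,
           "total_used_raw_bytes": 276824064},
 "pools": [{"name": "vms",
            "stats": {"stored": 42991616, "max_avail": 20401094656}}]}
'''

df = json.loads(sample)
gib = 1024 ** 3
print("cluster free: %.2f GiB" % (df["stats"]["total_avail_bytes"] / gib))
for pool in df["pools"]:
    print("pool %s: %.2f GiB still allocatable"
          % (pool["name"], pool["stats"]["max_avail"] / gib))
```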
Nov 23 16:14:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:17.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:14:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:17.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:14:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:17.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
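Alertmanager keeps failing to POST to the dashboard receivers on compute-1 and compute-2 (dial timeout, then context deadline exceeded on later attempts). A minimal sketch of a listener that would accept those webhooks, assuming only the port and path visible in the URLs above; this is not the ceph-dashboard implementation, and the real endpoint may be served over TLS:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Alertmanager webhooks POST a JSON body; read it and acknowledge.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("received %d bytes at %s" % (len(body), self.path))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 8443 matches the receiver URLs in the log; plain HTTP here.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```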
Nov 23 16:14:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:14:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177938279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:14:17 np0005532761 nova_compute[257263]: 2025-11-23 21:14:17.232 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:14:17 np0005532761 nova_compute[257263]: 2025-11-23 21:14:17.237 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:14:17 np0005532761 nova_compute[257263]: 2025-11-23 21:14:17.261 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:14:17 np0005532761 nova_compute[257263]: 2025-11-23 21:14:17.262 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:14:17 np0005532761 nova_compute[257263]: 2025-11-23 21:14:17.263 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
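The inventory reported at 21:14:17.261 is what Placement turns into schedulable capacity via (total - reserved) * allocation_ratio. A short worked sketch using exactly the numbers from that log line:

```python
# Effective schedulable capacity as Placement derives it from the inventory
# reported above: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print("%s: %.1f schedulable" % (rc, capacity))
# VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 53.1 -- which is why this node's
# 8 physical vCPUs can back up to 32 guest vCPUs.
```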
Nov 23 16:14:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:14:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:14:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
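The paired anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 arriving every ~2 s look like load-balancer health checks against radosgw. A small parser for the beast access-log layout as observed in these lines (the field order here is inferred from this deployment's output, not a documented stable schema):

```python
import re

# Field layout inferred from the radosgw beast access-log lines above.
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous '
        '[23/Nov/2025:21:14:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("method"), m.group("status"),
      m.group("latency"))
```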
Nov 23 16:14:19 np0005532761 nova_compute[257263]: 2025-11-23 21:14:19.263 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:14:19 np0005532761 nova_compute[257263]: 2025-11-23 21:14:19.280 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:14:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:14:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:22.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:22 np0005532761 podman[271859]: 2025-11-23 21:14:22.540714628 +0000 UTC m=+0.058250809 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 16:14:22 np0005532761 podman[271858]: 2025-11-23 21:14:22.559630236 +0000 UTC m=+0.076285553 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:14:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
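The ganesha NFS daemon keeps announcing a fresh 90-second grace window roughly every 5 seconds (21:14:17, 21:14:22, 21:14:27, ...) with clid count(0), so the window is re-armed before it can ever expire. A tiny sketch of the arithmetic, using the timestamp format from these lines:

```python
from datetime import datetime, timedelta

# Given one "IN GRACE, duration 90" event, compute when the grace window
# would lift if it were not restarted. Timestamp format as in the log above.
start = datetime.strptime("23/11/2025 21:14:22", "%d/%m/%Y %H:%M:%S")
grace = timedelta(seconds=90)
print("grace would lift at", start + grace)
# In this log the cycle restarts every ~5 s, well inside the 90 s window.
```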
Nov 23 16:14:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:24.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:14:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:26.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=cleanup t=2025-11-23T21:14:27.149350273Z level=info msg="Completed cleanup jobs" duration=5.413723ms
Nov 23 16:14:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:27.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:14:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana.update.checker t=2025-11-23T21:14:27.264529143Z level=info msg="Update check succeeded" duration=57.696403ms
Nov 23 16:14:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugins.update.checker t=2025-11-23T21:14:27.271015013Z level=info msg="Update check succeeded" duration=68.283822ms
Nov 23 16:14:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:14:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:14:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:28.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:14:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:30.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:14:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:14:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:14:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:34.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:34 np0005532761 podman[271943]: 2025-11-23 21:14:34.538558994 +0000 UTC m=+0.057950801 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 16:14:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
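At pgmap v995 the cluster's data jumps from 41 MiB to 88 MiB with 1.8 MiB/s of writes, the first real activity change in this window. A small parser for the pgmap DBG summary as formatted in these lines:

```python
import re

# Field layout taken from the ceph-mgr pgmap DBG lines above.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used"
)

line = ("log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 "
        "active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; "
        "19 KiB/s rd, 1.8 MiB/s wr, 28 op/s")
m = PGMAP.search(line)
print(m.group("ver"), m.group("pgs"), m.group("data"), m.group("used"))
```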
Nov 23 16:14:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:36.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:37.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:14:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:14:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:14:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:14:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:38.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.402854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479402913, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1038, "num_deletes": 503, "total_data_size": 1165394, "memory_usage": 1190096, "flush_reason": "Manual Compaction"}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479410585, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1006534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28451, "largest_seqno": 29487, "table_properties": {"data_size": 1002115, "index_size": 1559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13867, "raw_average_key_size": 19, "raw_value_size": 991050, "raw_average_value_size": 1393, "num_data_blocks": 68, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932421, "oldest_key_time": 1763932421, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 7767 microseconds, and 3787 cpu microseconds.
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.410630) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1006534 bytes OK
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.410647) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.414699) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.414728) EVENT_LOG_v1 {"time_micros": 1763932479414719, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.414756) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1159461, prev total WAL file size 1159461, number of live WAL files 2.
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.415644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(982KB)], [62(16MB)]
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479415671, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18726807, "oldest_snapshot_seqno": -1}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5781 keys, 12640164 bytes, temperature: kUnknown
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479609301, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12640164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12603430, "index_size": 21200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149634, "raw_average_key_size": 25, "raw_value_size": 12501077, "raw_average_value_size": 2162, "num_data_blocks": 849, "num_entries": 5781, "num_filter_entries": 5781, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.609972) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12640164 bytes
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.611853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.5 rd, 65.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.9 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(31.2) write-amplify(12.6) OK, records in: 6793, records dropped: 1012 output_compression: NoCompression
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.611892) EVENT_LOG_v1 {"time_micros": 1763932479611874, "job": 34, "event": "compaction_finished", "compaction_time_micros": 194061, "compaction_time_cpu_micros": 25554, "output_level": 6, "num_output_files": 1, "total_output_size": 12640164, "num_input_records": 6793, "num_output_records": 5781, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479613128, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479619990, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.415557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.620321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.620331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.620335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.620338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:14:39.620341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
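The compaction summary for JOB 34 reports read-write-amplify(31.2) and write-amplify(12.6); both ratios can be recomputed from the EVENT_LOG_v1 payloads above, relative to the 1006534-byte L0 table (#64) that triggered the compaction. A sketch (the JSON strings below are trimmed to the fields needed):

```python
import json

def event(line):
    # EVENT_LOG_v1 lines carry a JSON object after the marker.
    return json.loads(line.split("EVENT_LOG_v1 ", 1)[1])

started = event('rocksdb: EVENT_LOG_v1 {"job": 34, '
                '"event": "compaction_started", "input_data_size": 18726807}')
finished = event('rocksdb: EVENT_LOG_v1 {"job": 34, '
                 '"event": "compaction_finished", "total_output_size": 12640164}')

l0_bytes = 1006534  # size of L0 table #64 from the flush event above
wa = finished["total_output_size"] / l0_bytes
rwa = (started["input_data_size"] + finished["total_output_size"]) / l0_bytes
print("write-amplify %.1f, read-write-amplify %.1f" % (wa, rwa))  # 12.6, 31.2
```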
Nov 23 16:14:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:40.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 16:14:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:42.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 16:14:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:44.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:14:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:44.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:14:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 16:14:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:46 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:46.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:47.186Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:14:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:47.186Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:14:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:47.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:14:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:14:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:14:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:14:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:14:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:14:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:48.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:48.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 16:14:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:50.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:50.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:51 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 16:14:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:51.872 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:14:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:51.873 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:14:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:14:51.873 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
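The three DEBUG lines above are the standard acquire/acquired/released trace oslo_concurrency emits around an internal lock. A minimal sketch of the same pattern using the oslo_concurrency API; the lock name comes from the log, the guarded body is a placeholder:

    # Reproduces the acquire/acquired/released pattern traced above.
    from oslo_concurrency import lockutils

    with lockutils.lock("_check_child_processes"):
        # ProcessMonitor._check_child_processes does its child-process
        # liveness checks under this lock; a placeholder stands in here.
        pass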
Nov 23 16:14:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:52.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 16:14:53 np0005532761 podman[272009]: 2025-11-23 21:14:53.531518804 +0000 UTC m=+0.053293564 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 23 16:14:53 np0005532761 podman[272008]: 2025-11-23 21:14:53.571816884 +0000 UTC m=+0.096421829 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
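Both podman health_status records come from the containers' configured healthcheck ('test': '/openstack/healthcheck' in the embedded config_data), and both report healthy with a zero failing streak. The same check can be run on demand; a sketch, assuming the two container names from the log:

    # Manually trigger the health check podman runs on a timer.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")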
Nov 23 16:14:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:54.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Nov 23 16:14:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:14:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:14:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:14:56 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:14:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:14:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:14:57.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
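Alertmanager's dispatcher gave up on both ceph-dashboard webhook receivers after two attempts, with the POSTs to compute-1 and compute-2 timing out ("context deadline exceeded"). A quick reachability probe against the same receiver URLs; the empty alerts payload and 5-second timeout are illustrative:

    # Probe the webhook receivers Alertmanager failed to reach above.
    import json
    import urllib.request

    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]
    payload = json.dumps({"alerts": []}).encode()  # minimal, hypothetical body

    for url in URLS:
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, resp.status)
        except OSError as exc:  # URLError and socket timeouts both land here
            print(url, "unreachable:", exc)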
Nov 23 16:14:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 16:14:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:14:57] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:14:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:14:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:58.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:15:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:15:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:02.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:03 np0005532761 nova_compute[257263]: 2025-11-23 21:15:03.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:15:03
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.nfs']
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:15:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:15:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
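The mgr polls the mon for the OSD blocklist on a timer; each poll shows up as a handle_command plus an audit-channel dispatch, as above. The same query from the CLI, via subprocess (standard ceph syntax):

    # Issue the query the mgr dispatches above and parse the JSON reply.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))  # blocklisted client addresses, if any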
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
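Across the pg_autoscaler block above, every logged pg target equals usage_ratio x bias x 300. The 300 is presumably mon_target_pg_per_osd (default 100) times three OSDs; the OSD count is inferred from the numbers, not shown in this excerpt. A check against four of the logged pools:

    # Reproduce the pg_autoscaler arithmetic from the lines above.
    # ASSUMPTION: factor 300 = mon_target_pg_per_osd (100) x 3 OSDs (inferred).
    TARGET_PGS = 300

    pools = {  # name: (usage_ratio, bias) exactly as logged
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0007589550978381194, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(f"{name}: pg target = {usage * bias * TARGET_PGS}")

The "quantized to" figure then rounds to a power of two but will not move pg_num unless the ideal value is far from the current one (a 3x threshold by default), which is why these tiny targets all leave the pools unchanged.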
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:15:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
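The rbd_support handlers reload their trash-purge and mirror-snapshot schedules pool by pool (vms, volumes, backups, images), each with an empty start_after cursor. The configured schedules can be listed with the stock rbd CLI; a subprocess sketch over the pools named above:

    # List the schedules the rbd_support handlers reload above.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for args in (
            ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool],
            ["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
        ):
            out = subprocess.run(args, capture_output=True, text=True)
            print(" ".join(args), "->", out.stdout.strip() or "(none)")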
Nov 23 16:15:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 23 16:15:05 np0005532761 podman[272067]: 2025-11-23 21:15:05.553354869 +0000 UTC m=+0.064712307 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 16:15:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:06.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:07.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:15:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 12 KiB/s wr, 2 op/s
Nov 23 16:15:07 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:07.712 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:15:07 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:07.713 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:15:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:15:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:15:08 np0005532761 nova_compute[257263]: 2025-11-23 21:15:08.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:15:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
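Part of cephadm's reconciliation pass: it clears any per-host osd_memory_target override, host by host (compute-0 here, compute-1 and compute-2 in the lines that follow), presumably because memory autotuning is not driving those values on this cluster. The equivalent CLI call:

    # Equivalent of the "config rm" the mgr dispatches per host above.
    import subprocess

    for host in ("compute-0", "compute-1", "compute-2"):
        subprocess.run(
            ["ceph", "config", "rm", f"osd/host:{host}", "osd_memory_target"],
            check=True,
        )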
Nov 23 16:15:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.033 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.033 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.044 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.044 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:09 np0005532761 nova_compute[257263]: 2025-11-23 21:15:09.044 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
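Request ID req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 is nova-compute's periodic-task loop stepping through ComputeManager methods via oslo_service. A minimal sketch of how such a task is declared with the oslo_service API; the 60-second spacing and the task body are illustrative:

    # Minimal oslo_service periodic task, the mechanism behind the
    # "Running periodic task ComputeManager._*" lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_something(self, context):
            print("periodic task ran")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)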
Nov 23 16:15:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 30 op/s
Nov 23 16:15:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:15:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:15:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532761 nova_compute[257263]: 2025-11-23 21:15:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:10 np0005532761 nova_compute[257263]: 2025-11-23 21:15:10.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:15:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=infra.usagestats t=2025-11-23T21:15:10.181634546Z level=info msg="Usage stats are ready to report"
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:15:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:10.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:12 np0005532761 nova_compute[257263]: 2025-11-23 21:15:12.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:15:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:12.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
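This pair, config generate-minimal-conf plus auth get client.admin, is the same pattern cephadm uses when it refreshes /etc/ceph/ceph.conf and the admin keyring on managed hosts. Fetching the two artifacts by hand:

    # Fetch the same two artifacts cephadm is requesting above.
    import subprocess

    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          capture_output=True, text=True, check=True).stdout
    keyring = subprocess.run(["ceph", "auth", "get", "client.admin"],
                             capture_output=True, text=True, check=True).stdout
    print(conf)  # note: keyring holds the admin secret; handle with care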
Nov 23 16:15:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Nov 23 16:15:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:12.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:15:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.181700222 +0000 UTC m=+0.044036989 container create 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:15:13 np0005532761 systemd[1]: Started libpod-conmon-901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82.scope.
Nov 23 16:15:13 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.255632453 +0000 UTC m=+0.117969240 container init 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.160929181 +0000 UTC m=+0.023265958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.263245505 +0000 UTC m=+0.125582252 container start 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.266319147 +0000 UTC m=+0.128655884 container attach 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 16:15:13 np0005532761 pedantic_solomon[272375]: 167 167
Nov 23 16:15:13 np0005532761 systemd[1]: libpod-901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82.scope: Deactivated successfully.
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.269607654 +0000 UTC m=+0.131944401 container died 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:15:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4e9f9a4bb76e7dbbe0f0cff2f1193263f96a585e56b2433d64aa7a5a329281f6-merged.mount: Deactivated successfully.
Nov 23 16:15:13 np0005532761 podman[272358]: 2025-11-23 21:15:13.305418453 +0000 UTC m=+0.167755200 container remove 901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_solomon, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 16:15:13 np0005532761 systemd[1]: libpod-conmon-901e0363f252ed092d0244438c3f5359968c2331fec3afda8f668b7395d49e82.scope: Deactivated successfully.
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.459423468 +0000 UTC m=+0.042537039 container create 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:15:13 np0005532761 systemd[1]: Started libpod-conmon-2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7.scope.
Nov 23 16:15:13 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:13 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:13 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:13 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:13 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:13 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.533323669 +0000 UTC m=+0.116437250 container init 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.438491654 +0000 UTC m=+0.021605255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.541978508 +0000 UTC m=+0.125092069 container start 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.544882236 +0000 UTC m=+0.127995797 container attach 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:15:13 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 16:15:13 np0005532761 keen_hodgkin[272415]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:15:13 np0005532761 keen_hodgkin[272415]: --> All data devices are unavailable
Nov 23 16:15:13 np0005532761 systemd[1]: libpod-2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7.scope: Deactivated successfully.
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.877395276 +0000 UTC m=+0.460508837 container died 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 16:15:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4d61d8caf54dd603f86c3f4d8a1f89b6015130bf62e15320dd1c3cf561856240-merged.mount: Deactivated successfully.
Nov 23 16:15:13 np0005532761 podman[272399]: 2025-11-23 21:15:13.923723585 +0000 UTC m=+0.506837166 container remove 2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:15:13 np0005532761 systemd[1]: libpod-conmon-2443777f04345ab28ecde184e45d3d35af1833b79337f956059380900cb9dfe7.scope: Deactivated successfully.
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.44608396 +0000 UTC m=+0.041235504 container create 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 16:15:14 np0005532761 systemd[1]: Started libpod-conmon-3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012.scope.
Nov 23 16:15:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.427048076 +0000 UTC m=+0.022199640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.525701742 +0000 UTC m=+0.120853306 container init 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.533352935 +0000 UTC m=+0.128504459 container start 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 16:15:14 np0005532761 busy_kirch[272552]: 167 167
Nov 23 16:15:14 np0005532761 systemd[1]: libpod-3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012.scope: Deactivated successfully.
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.536521679 +0000 UTC m=+0.131673273 container attach 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.538517022 +0000 UTC m=+0.133668556 container died 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:15:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:14.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-981fb2ac0e83a46a425adf8975f5e8380686e69958c63c1ada7195fe876d2d6c-merged.mount: Deactivated successfully.
Nov 23 16:15:14 np0005532761 podman[272535]: 2025-11-23 21:15:14.577031584 +0000 UTC m=+0.172183118 container remove 3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kirch, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:15:14 np0005532761 systemd[1]: libpod-conmon-3d029c267b13dcca755582ae6857769747952c68bb2c8bf120037c9a0f86a012.scope: Deactivated successfully.
Nov 23 16:15:14 np0005532761 ceph-mon[74569]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 16:15:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Nov 23 16:15:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:14.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:14 np0005532761 podman[272576]: 2025-11-23 21:15:14.712124797 +0000 UTC m=+0.037112615 container create 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:15:14 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:14.715 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:14 np0005532761 systemd[1]: Started libpod-conmon-2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2.scope.
Nov 23 16:15:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e46f50ea17c3c19c7770bc3c63a1215b88ac39a78058db1874ecb4a5074839e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e46f50ea17c3c19c7770bc3c63a1215b88ac39a78058db1874ecb4a5074839e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e46f50ea17c3c19c7770bc3c63a1215b88ac39a78058db1874ecb4a5074839e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e46f50ea17c3c19c7770bc3c63a1215b88ac39a78058db1874ecb4a5074839e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:14 np0005532761 podman[272576]: 2025-11-23 21:15:14.69680127 +0000 UTC m=+0.021789108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:14 np0005532761 podman[272576]: 2025-11-23 21:15:14.798752515 +0000 UTC m=+0.123740343 container init 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 16:15:14 np0005532761 podman[272576]: 2025-11-23 21:15:14.804889367 +0000 UTC m=+0.129877195 container start 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:15:14 np0005532761 podman[272576]: 2025-11-23 21:15:14.808527724 +0000 UTC m=+0.133515542 container attach 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:15:15 np0005532761 strange_swartz[272593]: {
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:    "1": [
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:        {
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "devices": [
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "/dev/loop3"
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            ],
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "lv_name": "ceph_lv0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "lv_size": "21470642176",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "name": "ceph_lv0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "tags": {
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.cluster_name": "ceph",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.crush_device_class": "",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.encrypted": "0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.osd_id": "1",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.type": "block",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.vdo": "0",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:                "ceph.with_tpm": "0"
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            },
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "type": "block",
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:            "vg_name": "ceph_vg0"
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:        }
Nov 23 16:15:15 np0005532761 strange_swartz[272593]:    ]
Nov 23 16:15:15 np0005532761 strange_swartz[272593]: }
Nov 23 16:15:15 np0005532761 systemd[1]: libpod-2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2.scope: Deactivated successfully.
Nov 23 16:15:15 np0005532761 podman[272576]: 2025-11-23 21:15:15.102493002 +0000 UTC m=+0.427480820 container died 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:15:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7e46f50ea17c3c19c7770bc3c63a1215b88ac39a78058db1874ecb4a5074839e-merged.mount: Deactivated successfully.
Nov 23 16:15:15 np0005532761 podman[272576]: 2025-11-23 21:15:15.148175974 +0000 UTC m=+0.473163792 container remove 2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 23 16:15:15 np0005532761 systemd[1]: libpod-conmon-2f0ab7b38350713dfbe55567953f1b76af7db911d74d736938a44f902eea25b2.scope: Deactivated successfully.
Nov 23 16:15:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.693367725 +0000 UTC m=+0.042437446 container create 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:15:15 np0005532761 systemd[1]: Started libpod-conmon-9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d.scope.
Nov 23 16:15:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.675725137 +0000 UTC m=+0.024794878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.771766124 +0000 UTC m=+0.120835865 container init 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.779406307 +0000 UTC m=+0.128476028 container start 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:15:15 np0005532761 beautiful_sammet[272720]: 167 167
Nov 23 16:15:15 np0005532761 systemd[1]: libpod-9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d.scope: Deactivated successfully.
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.785111669 +0000 UTC m=+0.134181390 container attach 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.785405717 +0000 UTC m=+0.134475438 container died 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:15:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d99a2ade900d137c6fb270fd9182d260990b250652a68679f7d2203112e3efee-merged.mount: Deactivated successfully.
Nov 23 16:15:15 np0005532761 podman[272704]: 2025-11-23 21:15:15.824454182 +0000 UTC m=+0.173523903 container remove 9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:15:15 np0005532761 systemd[1]: libpod-conmon-9404d32c1451fa98127278b1b54515e708c62c0e3cae828542ed17b9d598824d.scope: Deactivated successfully.
Nov 23 16:15:15 np0005532761 podman[272744]: 2025-11-23 21:15:15.966575372 +0000 UTC m=+0.037241779 container create b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:15:16 np0005532761 systemd[1]: Started libpod-conmon-b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b.scope.
Nov 23 16:15:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:16 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:15:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67c459b6c0ea3d92e5434fddfd4000b18f7ba7568e3770798c732326133c90b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67c459b6c0ea3d92e5434fddfd4000b18f7ba7568e3770798c732326133c90b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67c459b6c0ea3d92e5434fddfd4000b18f7ba7568e3770798c732326133c90b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:16 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67c459b6c0ea3d92e5434fddfd4000b18f7ba7568e3770798c732326133c90b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:16.035016977 +0000 UTC m=+0.105683414 container init b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:16.041352346 +0000 UTC m=+0.112018763 container start b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:16.044307974 +0000 UTC m=+0.114974391 container attach b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:15.950336852 +0000 UTC m=+0.021003289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:15:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:16.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:16 np0005532761 lvm[272835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:15:16 np0005532761 lvm[272835]: VG ceph_vg0 finished
Nov 23 16:15:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.9 KiB/s wr, 30 op/s
Nov 23 16:15:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:16.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:16 np0005532761 elastic_sanderson[272760]: {}
Nov 23 16:15:16 np0005532761 systemd[1]: libpod-b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b.scope: Deactivated successfully.
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:16.746054368 +0000 UTC m=+0.816720785 container died b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:15:16 np0005532761 systemd[1]: libpod-b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b.scope: Consumed 1.058s CPU time.
Nov 23 16:15:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b67c459b6c0ea3d92e5434fddfd4000b18f7ba7568e3770798c732326133c90b-merged.mount: Deactivated successfully.
Nov 23 16:15:16 np0005532761 podman[272744]: 2025-11-23 21:15:16.793934828 +0000 UTC m=+0.864601245 container remove b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:15:16 np0005532761 systemd[1]: libpod-conmon-b405822f27444be20f336922dd90338cdfa5e5f370670ce61eb479f438d1870b.scope: Deactivated successfully.
Nov 23 16:15:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:15:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:15:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:17.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:15:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:15:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.057 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.057 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.058 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.058 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.059 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133778263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.495 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:15:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.637 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.638 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4899MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.638 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.638 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.9 KiB/s wr, 30 op/s
Nov 23 16:15:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:18.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.701 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.702 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:15:18 np0005532761 nova_compute[257263]: 2025-11-23 21:15:18.735 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161376870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:19 np0005532761 nova_compute[257263]: 2025-11-23 21:15:19.142 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:19 np0005532761 nova_compute[257263]: 2025-11-23 21:15:19.147 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:15:19 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:19 np0005532761 nova_compute[257263]: 2025-11-23 21:15:19.297 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:15:19 np0005532761 nova_compute[257263]: 2025-11-23 21:15:19.299 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:15:19 np0005532761 nova_compute[257263]: 2025-11-23 21:15:19.299 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:20 np0005532761 nova_compute[257263]: 2025-11-23 21:15:20.298 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:15:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:15:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:15:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:15:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:24 np0005532761 podman[272927]: 2025-11-23 21:15:24.531493387 +0000 UTC m=+0.054974218 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 23 16:15:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:24.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:24 np0005532761 podman[272926]: 2025-11-23 21:15:24.578468914 +0000 UTC m=+0.103064115 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
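[editor's note] The two health_status=healthy records above come from podman's periodic healthcheck timers for ovn_metadata_agent and ovn_controller (the 'healthcheck' entry in each container's config_data). The same check can be run on demand; exit code 0 means healthy:

import subprocess

# Container names taken from the health_status records above.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
    print(f"{name}: {status}")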
Nov 23 16:15:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:15:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:15:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:27.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:15:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:27.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
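[editor's note] The two dispatcher entries above show Alertmanager failing to deliver to the Ceph dashboard's prometheus_receiver on compute-1 and compute-2 (dial timeout, then retries cancelled); the same pattern recurs every ~10 s below. A quick reachability probe of the same URL, taken verbatim from the log (the 5 s timeout is an arbitrary choice):

import urllib.error
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
try:
    # An empty POST is enough to separate "no TCP connection" (the i/o
    # timeout logged above) from an HTTP-level answer of any status.
    urllib.request.urlopen(url, data=b"{}", timeout=5)
    print("reachable, HTTP 2xx")
except urllib.error.HTTPError as exc:
    print(f"reachable, HTTP {exc.code}")
except (urllib.error.URLError, OSError) as exc:
    print(f"unreachable: {exc}")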
Nov 23 16:15:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:15:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:15:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:15:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:28.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:15:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:15:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:15:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:15:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:15:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
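[editor's note] The audit entry shows the active mgr (mgr.compute-0.oyehye) periodically dispatching "osd blocklist ls" as a JSON mon command. The same command can be issued from the python-rados binding; the conf path is a deployment assumption:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # ASSUMPTION: default path
cluster.connect()
cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})  # verbatim from the audit line
ret, outbuf, errs = cluster.mon_command(cmd, b"")
print(ret, outbuf.decode() or errs)
cluster.shutdown()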
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:15:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:15:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:15:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:34.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:36 np0005532761 podman[273006]: 2025-11-23 21:15:36.670543467 +0000 UTC m=+0.060347581 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:15:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:15:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:37.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:15:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:37] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 23 16:15:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:37] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 23 16:15:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:38.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:15:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:38.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:40.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 16:15:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:15:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:42.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:44.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 16:15:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:46.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:15:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:47.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:15:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:47.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:15:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:47] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 23 16:15:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:47] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 23 16:15:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:15:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:15:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:15:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:48.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:50.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 163 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 23 16:15:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:51.874 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:15:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:51.874 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:15:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:15:51.875 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
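[editor's note] The acquiring/acquired/released triplet above is the standard oslo_concurrency.lockutils trace (lockutils.py:404/409/423) around a named critical section; any code using the same helper emits identical lines when debug logging is enabled. A minimal sketch of the pattern:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; lockutils logs the acquire,
    # wait and hold times seen in the journal entries above.
    pass

check_child_processes()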
Nov 23 16:15:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:52.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 163 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Nov 23 16:15:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:52.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 23 16:15:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:54.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:15:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:15:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:15:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:15:55 np0005532761 podman[273072]: 2025-11-23 21:15:55.524770591 +0000 UTC m=+0.044929642 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:15:55 np0005532761 podman[273071]: 2025-11-23 21:15:55.552838386 +0000 UTC m=+0.074985600 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:15:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 23 16:15:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:56.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:57.192Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:15:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:15:57.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:15:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:57] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:15:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:15:57] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:15:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 23 16:15:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:15:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:58.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:15:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:00.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 23 16:16:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 50 KiB/s wr, 84 op/s
Nov 23 16:16:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:02.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:16:03
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'vms', 'images']
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:16:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:16:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011073049952790258 of space, bias 1.0, pg target 0.3321914985837077 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
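[editor's note] Each pg_autoscaler line above is reproducible as target = used_fraction * bias * budget, where a budget of 300 PGs fits every pool exactly; that figure plausibly corresponds to the default mon_target_pg_per_osd=100 across this cluster's 3 OSDs (an inference, not stated in the log). A check against the logged values:

BUDGET = 300  # ASSUMPTION: mon_target_pg_per_osd (100) * 3 OSDs

# (used_fraction, bias) pairs copied from the pg_autoscaler lines above.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0011073049952790258, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}
for name, (used, bias) in pools.items():
    # .mgr -> 0.0021557..., vms -> 0.33219..., matching the logged targets.
    print(f"{name}: pg target {used * bias * BUDGET:.16g}")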
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:16:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:16:04 np0005532761 nova_compute[257263]: 2025-11-23 21:16:04.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:04.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 50 KiB/s wr, 85 op/s
Nov 23 16:16:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:04.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2330035066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:06.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 23 16:16:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:06.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:07.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:07 np0005532761 podman[273149]: 2025-11-23 21:16:07.542019251 +0000 UTC m=+0.060550687 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 16:16:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:07] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:07] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 23 16:16:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:08 np0005532761 ceph-mgr[74869]: [dashboard INFO request] [192.168.122.100:57882] [POST] [200] [0.002s] [4.0B] [009f0e74-41ef-4bf1-b13c-02b48673252b] /api/prometheus_receiver
Nov 23 16:16:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:10 np0005532761 nova_compute[257263]: 2025-11-23 21:16:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:10 np0005532761 nova_compute[257263]: 2025-11-23 21:16:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:10 np0005532761 nova_compute[257263]: 2025-11-23 21:16:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:10 np0005532761 nova_compute[257263]: 2025-11-23 21:16:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:10 np0005532761 nova_compute[257263]: 2025-11-23 21:16:10.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:16:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 188 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 23 16:16:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:11 np0005532761 nova_compute[257263]: 2025-11-23 21:16:11.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:11 np0005532761 nova_compute[257263]: 2025-11-23 21:16:11.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:16:11 np0005532761 nova_compute[257263]: 2025-11-23 21:16:11.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:16:11 np0005532761 nova_compute[257263]: 2025-11-23 21:16:11.048 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:16:12 np0005532761 nova_compute[257263]: 2025-11-23 21:16:12.042 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:12.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 188 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Nov 23 16:16:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:12.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 16:16:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:14.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:16.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 23 16:16:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:17.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:17] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:17] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:17 np0005532761 podman[273304]: 2025-11-23 21:16:17.840694038 +0000 UTC m=+0.123022873 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:16:17 np0005532761 podman[273304]: 2025-11-23 21:16:17.976206473 +0000 UTC m=+0.258535268 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.066 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.066 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.066 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:16:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:16:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:16:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724600131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.520 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:16:18 np0005532761 podman[273445]: 2025-11-23 21:16:18.574478002 +0000 UTC m=+0.055006890 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:18 np0005532761 podman[273445]: 2025-11-23 21:16:18.586124352 +0000 UTC m=+0.066653220 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:18.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.672 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.674 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4861MB free_disk=59.89735412597656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.674 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.674 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:16:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 23 16:16:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.743 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.743 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:16:18 np0005532761 nova_compute[257263]: 2025-11-23 21:16:18.761 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:16:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:18.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:18 np0005532761 podman[273537]: 2025-11-23 21:16:18.955402597 +0000 UTC m=+0.072958177 container exec 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:16:18 np0005532761 podman[273537]: 2025-11-23 21:16:18.967132018 +0000 UTC m=+0.084687598 container exec_died 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 16:16:19 np0005532761 podman[273620]: 2025-11-23 21:16:19.176279605 +0000 UTC m=+0.056833508 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:16:19 np0005532761 podman[273620]: 2025-11-23 21:16:19.217454288 +0000 UTC m=+0.098008221 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:16:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473381753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:19 np0005532761 nova_compute[257263]: 2025-11-23 21:16:19.237 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:16:19 np0005532761 nova_compute[257263]: 2025-11-23 21:16:19.244 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:16:19 np0005532761 nova_compute[257263]: 2025-11-23 21:16:19.257 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:16:19 np0005532761 nova_compute[257263]: 2025-11-23 21:16:19.258 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:16:19 np0005532761 nova_compute[257263]: 2025-11-23 21:16:19.258 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:16:19 np0005532761 podman[273687]: 2025-11-23 21:16:19.43959314 +0000 UTC m=+0.050489901 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Nov 23 16:16:19 np0005532761 podman[273687]: 2025-11-23 21:16:19.452116242 +0000 UTC m=+0.063012973 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Nov 23 16:16:19 np0005532761 podman[273752]: 2025-11-23 21:16:19.637396867 +0000 UTC m=+0.049806963 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:19 np0005532761 podman[273752]: 2025-11-23 21:16:19.689222982 +0000 UTC m=+0.101633088 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:19 np0005532761 podman[273825]: 2025-11-23 21:16:19.899975062 +0000 UTC m=+0.061973825 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:16:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:20 np0005532761 podman[273825]: 2025-11-23 21:16:20.063230182 +0000 UTC m=+0.225228925 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:16:20 np0005532761 nova_compute[257263]: 2025-11-23 21:16:20.258 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:20 np0005532761 podman[273937]: 2025-11-23 21:16:20.376910753 +0000 UTC m=+0.042993922 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:20 np0005532761 podman[273937]: 2025-11-23 21:16:20.404513374 +0000 UTC m=+0.070596523 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:16:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:16:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:16:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:20 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:20.618 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:16:20 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:20.619 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:16:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:20.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 16:16:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 123 KiB/s wr, 24 op/s
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:16:21 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:21.620 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.73026825 +0000 UTC m=+0.062415596 container create c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:16:21 np0005532761 systemd[1]: Started libpod-conmon-c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612.scope.
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.691930924 +0000 UTC m=+0.024078330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:21 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.834964778 +0000 UTC m=+0.167112124 container init c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.84375122 +0000 UTC m=+0.175898576 container start c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.847462089 +0000 UTC m=+0.179609435 container attach c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 16:16:21 np0005532761 beautiful_shamir[274173]: 167 167
Nov 23 16:16:21 np0005532761 systemd[1]: libpod-c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612.scope: Deactivated successfully.
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.850845318 +0000 UTC m=+0.182992674 container died c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Nov 23 16:16:21 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c5d38e9a5116e85b94a4dfbb5f0cd95c2560884cc07ef0563a9a4a7003ed75ff-merged.mount: Deactivated successfully.
Nov 23 16:16:21 np0005532761 podman[274156]: 2025-11-23 21:16:21.971645573 +0000 UTC m=+0.303792929 container remove c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:16:22 np0005532761 systemd[1]: libpod-conmon-c644fb904fcc6c7c378bda237424c3c6a8f070671a7c840a12a93abe1f34f612.scope: Deactivated successfully.
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.150494197 +0000 UTC m=+0.047315155 container create 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 16:16:22 np0005532761 systemd[1]: Started libpod-conmon-56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98.scope.
Nov 23 16:16:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.130464146 +0000 UTC m=+0.027285104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.239300123 +0000 UTC m=+0.136121061 container init 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.247119741 +0000 UTC m=+0.143940699 container start 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.251826535 +0000 UTC m=+0.148647493 container attach 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 16:16:22 np0005532761 romantic_wilson[274213]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:16:22 np0005532761 romantic_wilson[274213]: --> All data devices are unavailable
Nov 23 16:16:22 np0005532761 systemd[1]: libpod-56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98.scope: Deactivated successfully.
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.599682982 +0000 UTC m=+0.496503930 container died 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:16:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cd9ca43fa7987003a29b43908e1d579f1a789cb481e8d83ef95218be4d4c4b37-merged.mount: Deactivated successfully.
Nov 23 16:16:22 np0005532761 podman[274197]: 2025-11-23 21:16:22.646248118 +0000 UTC m=+0.543069056 container remove 56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wilson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 16:16:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:22.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:22 np0005532761 systemd[1]: libpod-conmon-56167ec0d441bd5e6850d920a136fbb535d6b9ad72b257bbe1e4c6e55206de98.scope: Deactivated successfully.
Nov 23 16:16:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:23 np0005532761 nova_compute[257263]: 2025-11-23 21:16:23.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:16:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 123 KiB/s wr, 24 op/s
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.243724915 +0000 UTC m=+0.035604015 container create b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:16:23 np0005532761 systemd[1]: Started libpod-conmon-b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9.scope.
Nov 23 16:16:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.320617945 +0000 UTC m=+0.112497045 container init b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.228011439 +0000 UTC m=+0.019890539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.326373137 +0000 UTC m=+0.118252227 container start b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 23 16:16:23 np0005532761 compassionate_jemison[274347]: 167 167
Nov 23 16:16:23 np0005532761 systemd[1]: libpod-b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9.scope: Deactivated successfully.
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.331627997 +0000 UTC m=+0.123507097 container attach b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.331960686 +0000 UTC m=+0.123839776 container died b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:16:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1607abcc152803b35f9223c9becb838a91be76b45e5c9c87f71409f3a8ba8cb6-merged.mount: Deactivated successfully.
Nov 23 16:16:23 np0005532761 podman[274331]: 2025-11-23 21:16:23.373112918 +0000 UTC m=+0.164991998 container remove b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 16:16:23 np0005532761 systemd[1]: libpod-conmon-b4eb6c35749c63d9327ce0f9d47bacc5f84a51531c18546cca9981efa69e2ed9.scope: Deactivated successfully.
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.532488846 +0000 UTC m=+0.040816495 container create a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:16:23 np0005532761 systemd[1]: Started libpod-conmon-a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae.scope.
Nov 23 16:16:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.512961897 +0000 UTC m=+0.021289556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa9a52409e6cdc512b030d8ea29c1245df2fc90c4037d1b42966cbdd59524bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa9a52409e6cdc512b030d8ea29c1245df2fc90c4037d1b42966cbdd59524bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa9a52409e6cdc512b030d8ea29c1245df2fc90c4037d1b42966cbdd59524bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa9a52409e6cdc512b030d8ea29c1245df2fc90c4037d1b42966cbdd59524bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.621387813 +0000 UTC m=+0.129715472 container init a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.636578576 +0000 UTC m=+0.144906195 container start a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.640421108 +0000 UTC m=+0.148748767 container attach a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]: {
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:    "1": [
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:        {
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "devices": [
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "/dev/loop3"
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            ],
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "lv_name": "ceph_lv0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "lv_size": "21470642176",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "name": "ceph_lv0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "tags": {
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.cluster_name": "ceph",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.crush_device_class": "",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.encrypted": "0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.osd_id": "1",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.type": "block",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.vdo": "0",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:                "ceph.with_tpm": "0"
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            },
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "type": "block",
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:            "vg_name": "ceph_vg0"
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:        }
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]:    ]
Nov 23 16:16:23 np0005532761 awesome_lamarr[274387]: }
Nov 23 16:16:23 np0005532761 systemd[1]: libpod-a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae.scope: Deactivated successfully.
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.93643769 +0000 UTC m=+0.444765319 container died a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 16:16:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-8aa9a52409e6cdc512b030d8ea29c1245df2fc90c4037d1b42966cbdd59524bd-merged.mount: Deactivated successfully.
Nov 23 16:16:23 np0005532761 podman[274370]: 2025-11-23 21:16:23.978479295 +0000 UTC m=+0.486806924 container remove a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:16:23 np0005532761 systemd[1]: libpod-conmon-a43e7d2c2fb274765a21c9af235a7e6d5b86e7515491c018b96abda8f9ed7cae.scope: Deactivated successfully.
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.53172584 +0000 UTC m=+0.036331725 container create 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:16:24 np0005532761 systemd[1]: Started libpod-conmon-4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd.scope.
Nov 23 16:16:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.598309227 +0000 UTC m=+0.102915142 container init 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.604150781 +0000 UTC m=+0.108756666 container start 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.607271174 +0000 UTC m=+0.111877089 container attach 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 16:16:24 np0005532761 vigorous_mendel[274517]: 167 167
Nov 23 16:16:24 np0005532761 systemd[1]: libpod-4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd.scope: Deactivated successfully.
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.610694585 +0000 UTC m=+0.115300500 container died 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.515990823 +0000 UTC m=+0.020596718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-53b15fbfe76a7b867b77b9a4772764a9ab453a2dbdf5eb72a8ceeb1035f6df79-merged.mount: Deactivated successfully.
Nov 23 16:16:24 np0005532761 podman[274500]: 2025-11-23 21:16:24.648850447 +0000 UTC m=+0.153456332 container remove 4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Nov 23 16:16:24 np0005532761 systemd[1]: libpod-conmon-4974cf77eace377927ae8024196965b6eb458e6d51ebe88518af73b5284b79fd.scope: Deactivated successfully.
Nov 23 16:16:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:24 np0005532761 podman[274542]: 2025-11-23 21:16:24.802411661 +0000 UTC m=+0.044243655 container create 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 23 16:16:24 np0005532761 systemd[1]: Started libpod-conmon-089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701.scope.
Nov 23 16:16:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:16:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436096c8756e5853e898257e87a71833f41e153f9227c94d806118339150e505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436096c8756e5853e898257e87a71833f41e153f9227c94d806118339150e505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436096c8756e5853e898257e87a71833f41e153f9227c94d806118339150e505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:24 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436096c8756e5853e898257e87a71833f41e153f9227c94d806118339150e505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:16:24 np0005532761 podman[274542]: 2025-11-23 21:16:24.876478014 +0000 UTC m=+0.118310018 container init 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 23 16:16:24 np0005532761 podman[274542]: 2025-11-23 21:16:24.785986325 +0000 UTC m=+0.027818329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:16:24 np0005532761 podman[274542]: 2025-11-23 21:16:24.888513924 +0000 UTC m=+0.130345928 container start 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 16:16:24 np0005532761 podman[274542]: 2025-11-23 21:16:24.89250878 +0000 UTC m=+0.134340774 container attach 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:16:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Nov 23 16:16:25 np0005532761 lvm[274634]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:16:25 np0005532761 lvm[274634]: VG ceph_vg0 finished
Nov 23 16:16:25 np0005532761 busy_hodgkin[274559]: {}
Nov 23 16:16:25 np0005532761 systemd[1]: libpod-089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701.scope: Deactivated successfully.
Nov 23 16:16:25 np0005532761 podman[274542]: 2025-11-23 21:16:25.557401607 +0000 UTC m=+0.799233581 container died 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:16:25 np0005532761 systemd[1]: libpod-089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701.scope: Consumed 1.042s CPU time.
Nov 23 16:16:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-436096c8756e5853e898257e87a71833f41e153f9227c94d806118339150e505-merged.mount: Deactivated successfully.
Nov 23 16:16:25 np0005532761 podman[274542]: 2025-11-23 21:16:25.601924838 +0000 UTC m=+0.843756812 container remove 089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 23 16:16:25 np0005532761 systemd[1]: libpod-conmon-089bc29f48274990071200ef523dc6dff7e50c610a59da7696ee2a6e87249701.scope: Deactivated successfully.
Nov 23 16:16:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:25 np0005532761 podman[274639]: 2025-11-23 21:16:25.659767152 +0000 UTC m=+0.062179091 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 23 16:16:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:16:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:16:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:25 np0005532761 podman[274650]: 2025-11-23 21:16:25.698304314 +0000 UTC m=+0.100823916 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 16:16:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Nov 23 16:16:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:27.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:16:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:27.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:16:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:27] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Nov 23 16:16:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:27] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Nov 23 16:16:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:28.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:28.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:28.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 135 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 23 16:16:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:30.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:30.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 9.0 KiB/s wr, 34 op/s
Nov 23 16:16:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:32.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:32.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.8 KiB/s wr, 30 op/s
Nov 23 16:16:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:16:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:16:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:16:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:34.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Nov 23 16:16:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 16:16:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:37.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:37] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:37] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:38.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:38 np0005532761 podman[274756]: 2025-11-23 21:16:38.678767145 +0000 UTC m=+0.095909645 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 16:16:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 16:16:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:40.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:40.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Nov 23 16:16:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:42.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:16:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 16:16:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:46.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:16:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:47.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:47] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:47] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 23 16:16:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:16:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:16:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:48.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:16:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:50.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:16:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:51.875 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:16:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:51.875 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:16:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:16:51.875 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:16:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:52.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:16:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:54.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:16:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:56 np0005532761 podman[274819]: 2025-11-23 21:16:56.539690991 +0000 UTC m=+0.056700505 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 16:16:56 np0005532761 podman[274818]: 2025-11-23 21:16:56.57056459 +0000 UTC m=+0.089853664 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:16:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:56.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:16:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:57.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:16:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:57.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:16:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:57.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:57] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 23 16:16:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:16:57] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 23 16:16:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:16:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:16:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:16:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:16:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:16:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:16:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:58.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:16:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:16:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 73 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Nov 23 16:17:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:00.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:17:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:02.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:17:03
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'backups', 'volumes', 'vms', '.mgr', '.rgw.root']
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 16:17:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:17:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:17:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:17:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:04.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:05 np0005532761 nova_compute[257263]: 2025-11-23 21:17:05.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 16:17:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:06 np0005532761 nova_compute[257263]: 2025-11-23 21:17:06.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:06 np0005532761 nova_compute[257263]: 2025-11-23 21:17:06.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 16:17:06 np0005532761 nova_compute[257263]: 2025-11-23 21:17:06.063 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 16:17:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:06.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 23 16:17:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:07.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:17:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:17:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:17:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981719019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:17:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:17:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981719019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:17:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:08.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:08.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:08.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 23 16:17:09 np0005532761 podman[274900]: 2025-11-23 21:17:09.571437091 +0000 UTC m=+0.084970596 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 16:17:10 np0005532761 nova_compute[257263]: 2025-11-23 21:17:10.064 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:10.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:11 np0005532761 nova_compute[257263]: 2025-11-23 21:17:11.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 160 KiB/s wr, 87 op/s
Nov 23 16:17:12 np0005532761 nova_compute[257263]: 2025-11-23 21:17:12.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:12 np0005532761 nova_compute[257263]: 2025-11-23 21:17:12.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:12 np0005532761 nova_compute[257263]: 2025-11-23 21:17:12.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:17:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:12.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:12.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.049 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.049 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:13 np0005532761 nova_compute[257263]: 2025-11-23 21:17:13.049 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 16:17:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 16:17:14 np0005532761 nova_compute[257263]: 2025-11-23 21:17:14.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:14 np0005532761 nova_compute[257263]: 2025-11-23 21:17:14.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:14.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 16:17:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:16.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 16:17:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:17.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:17:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.042 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.070 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.070 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.071 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.071 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.071 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:17:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:17:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:17:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:17:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738017921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.544 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
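
The `ceph df --format=json` subprocess above is how nova's libvirt driver sizes its RBD storage backend. A minimal sketch of the same probe, assuming the ceph CLI and the client.openstack keyring are available; the JSON field names ("stats", "total_bytes", "total_avail_bytes") are assumptions based on current Ceph output, not taken from this log:

    # Hypothetical re-run of the capacity probe logged above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed key layout
    gib = 1024 ** 3
    print(f"avail {stats['total_avail_bytes'] / gib:.1f} GiB "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
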
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.706 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.708 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.708 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.709 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:17:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:18.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.817 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.817 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:18.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:18.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:17:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:18.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
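
These dispatcher errors show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A quick way to reproduce the symptom outside Alertmanager is to post to the same endpoint directly; the URL is taken from the log line above, while the empty payload shape is purely illustrative:

    # Hedged reachability probe for the failing webhook endpoint.
    import json
    import urllib.error
    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),  # illustrative body
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("HTTP", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        # A timeout here matches the "dial tcp ... i/o timeout" above.
        print("unreachable:", exc)
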
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.888 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing inventories for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.993 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating ProviderTree inventory for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 16:17:18 np0005532761 nova_compute[257263]: 2025-11-23 21:17:18.993 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating inventory in ProviderTree for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
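
The inventory being pushed to ProviderTree above is what Placement schedules against: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Worked through with the numbers from this log:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1
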
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.015 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing aggregate associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.036 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing trait associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.048 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:17:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 115 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Nov 23 16:17:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:17:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964643656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.504 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.509 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.530 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.531 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:17:19 np0005532761 nova_compute[257263]: 2025-11-23 21:17:19.531 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
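
The Acquiring/acquired/released triple, with its waited/held timings, is emitted by oslo.concurrency's lock helper whenever nova serializes on "compute_resources". A minimal sketch of the same pattern (lock name from the log; the body is a placeholder):

    # Sketch of the lockutils pattern behind the DEBUG lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # nova's resource tracker updates its host view inside this
        # critical section; lockutils logs the waited/held durations.
        pass
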
Nov 23 16:17:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:20.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 23 16:17:22 np0005532761 nova_compute[257263]: 2025-11-23 21:17:22.524 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:17:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:22.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 16:17:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 23 16:17:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:26.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:17:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:17:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
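
Each handle_command entry above is a JSON mon command dispatched on the mgr's behalf. The same commands can be issued from Python through librados; the client name and conffile below are assumptions matching earlier log lines, and the python3-rados bindings must be installed:

    # Hedged sketch: issue the "df" mon command seen above via librados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        print(ret, json.loads(out)["stats"]["total_bytes"])  # assumed keys
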
Nov 23 16:17:27 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:27.028 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:17:27 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:27.030 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
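
The "Matched UPDATE: SbGlobalUpdateEvent(...)" line is ovsdbapp's event machinery firing on an SB_Global row change (nb_cfg 12 -> 13). A minimal sketch of such an event class, with the table and event type mirroring the log and a purely illustrative handler:

    # Sketch of an ovsdbapp RowEvent like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)
            self.event_name = "SbGlobalUpdateEvent"

        def run(self, event, row, old):
            # Fires on transitions such as nb_cfg 12 -> 13 logged above.
            print("SB_Global updated, nb_cfg =", row.nb_cfg)
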
Nov 23 16:17:27 np0005532761 podman[275114]: 2025-11-23 21:17:27.188368195 +0000 UTC m=+0.084318528 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:17:27 np0005532761 podman[275100]: 2025-11-23 21:17:27.193644065 +0000 UTC m=+0.090487402 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:17:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:27.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:17:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:27 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.530709076 +0000 UTC m=+0.038219735 container create 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:17:27 np0005532761 systemd[1]: Started libpod-conmon-2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc.scope.
Nov 23 16:17:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.607111212 +0000 UTC m=+0.114621911 container init 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.512999026 +0000 UTC m=+0.020509705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.61644831 +0000 UTC m=+0.123958979 container start 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.620416345 +0000 UTC m=+0.127927014 container attach 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 16:17:27 np0005532761 inspiring_archimedes[275240]: 167 167
Nov 23 16:17:27 np0005532761 systemd[1]: libpod-2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc.scope: Deactivated successfully.
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.622151911 +0000 UTC m=+0.129662590 container died 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:17:27 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f2274e75b93ba80d2937f9ba94dd6d7f117db3a37bb7d6b6d6987835b23c6dba-merged.mount: Deactivated successfully.
Nov 23 16:17:27 np0005532761 podman[275224]: 2025-11-23 21:17:27.656570104 +0000 UTC m=+0.164080753 container remove 2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_archimedes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:17:27 np0005532761 systemd[1]: libpod-conmon-2472f415893a3ab52db157b4654ae5116d4cf216dc15bf201d2e3392853dc3dc.scope: Deactivated successfully.
Nov 23 16:17:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:17:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 23 16:17:27 np0005532761 podman[275264]: 2025-11-23 21:17:27.810089686 +0000 UTC m=+0.036783977 container create f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:17:27 np0005532761 systemd[1]: Started libpod-conmon-f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781.scope.
Nov 23 16:17:27 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:27 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:27 np0005532761 podman[275264]: 2025-11-23 21:17:27.793986239 +0000 UTC m=+0.020680560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:27 np0005532761 podman[275264]: 2025-11-23 21:17:27.893631342 +0000 UTC m=+0.120325653 container init f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:17:27 np0005532761 podman[275264]: 2025-11-23 21:17:27.900454533 +0000 UTC m=+0.127148824 container start f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 16:17:27 np0005532761 podman[275264]: 2025-11-23 21:17:27.903240537 +0000 UTC m=+0.129934828 container attach f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 16:17:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:28 np0005532761 wonderful_lumiere[275281]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:17:28 np0005532761 wonderful_lumiere[275281]: --> All data devices are unavailable
Nov 23 16:17:28 np0005532761 systemd[1]: libpod-f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781.scope: Deactivated successfully.
Nov 23 16:17:28 np0005532761 podman[275264]: 2025-11-23 21:17:28.223581224 +0000 UTC m=+0.450275515 container died f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:17:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d2c0e3bb85fb9400a317fa4e28b16e8a0e15acf211296dcd20972d89b86e4282-merged.mount: Deactivated successfully.
Nov 23 16:17:28 np0005532761 podman[275264]: 2025-11-23 21:17:28.271912946 +0000 UTC m=+0.498607237 container remove f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lumiere, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:17:28 np0005532761 systemd[1]: libpod-conmon-f9148685b44855c1ec3143621b7148a1c6fb8f8ef6bf4c0b4b519636d1cb7781.scope: Deactivated successfully.
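
The short-lived ceph containers above ("passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable") look like cephadm's periodic OSD device probes. The exact command cephadm runs is not in this log; a comparable report can be pulled with ceph-volume's standard inventory subcommand:

    # Hedged sketch: list candidate OSD devices the way cephadm's probe
    # appears to; field names follow ceph-volume's documented JSON output.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"],
              dev.get("rejected_reasons", []))
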
Nov 23 16:17:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:28.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.795848903 +0000 UTC m=+0.040627698 container create c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:17:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:28 np0005532761 systemd[1]: Started libpod-conmon-c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871.scope.
Nov 23 16:17:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:28 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.77647026 +0000 UTC m=+0.021249085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.878344932 +0000 UTC m=+0.123123757 container init c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.887874765 +0000 UTC m=+0.132653560 container start c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.891410329 +0000 UTC m=+0.136189174 container attach c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:17:28 np0005532761 blissful_wiles[275415]: 167 167
Nov 23 16:17:28 np0005532761 systemd[1]: libpod-c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871.scope: Deactivated successfully.
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.895065945 +0000 UTC m=+0.139844820 container died c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:17:28 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9fb995289be55ab53eec893271f02df48e2b4033a9483c7d4986756064609dc4-merged.mount: Deactivated successfully.
Nov 23 16:17:28 np0005532761 podman[275399]: 2025-11-23 21:17:28.932739455 +0000 UTC m=+0.177518260 container remove c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_wiles, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 16:17:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 64 op/s
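[editor's note] The pgmap DBG lines from ceph-mgr are periodic cluster telemetry: pgmap version, placement-group count and states, data/used/available capacity, and current client throughput. All 337 PGs stay active+clean across this section, so the container churn around them is routine housekeeping rather than recovery. A sketch pulling the fields out of one line (layout inferred from the samples here):

    import re

    # Field layout inferred from the pgmap lines in this log.
    PGMAP_RE = re.compile(
        r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail'
    )

    line = ('pgmap v1083: 337 pgs: 337 active+clean; 121 MiB data, '
            '318 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, '
            '2.2 MiB/s wr, 64 op/s')
    print(PGMAP_RE.search(line).groupdict())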
Nov 23 16:17:28 np0005532761 systemd[1]: libpod-conmon-c3b246715da22f310a5ba82de5fb06eb8157009f1637e5369c06c91a6a9e9871.scope: Deactivated successfully.
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.131898897 +0000 UTC m=+0.046911135 container create e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:17:29 np0005532761 systemd[1]: Started libpod-conmon-e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c.scope.
Nov 23 16:17:29 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b173829ee42508404c3852c311fcd0b663323de622b455246f74019e7857b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b173829ee42508404c3852c311fcd0b663323de622b455246f74019e7857b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b173829ee42508404c3852c311fcd0b663323de622b455246f74019e7857b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:29 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b173829ee42508404c3852c311fcd0b663323de622b455246f74019e7857b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
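[editor's note] The kernel prints one of these notices for each xfs bind mount entering the container's mount namespace: without the xfs bigtime feature, on-disk inode timestamps top out at 0x7fffffff, the largest 32-bit signed time_t. A one-liner confirming what that limit means as a date:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, the limit the kernel
    # notice refers to for xfs filesystems without the bigtime feature.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00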
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.112075962 +0000 UTC m=+0.027088250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.20965565 +0000 UTC m=+0.124667908 container init e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.21683476 +0000 UTC m=+0.131847008 container start e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.220918939 +0000 UTC m=+0.135931267 container attach e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 23 16:17:29 np0005532761 elated_burnell[275457]: {
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:    "1": [
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:        {
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "devices": [
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "/dev/loop3"
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            ],
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "lv_name": "ceph_lv0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "lv_size": "21470642176",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "name": "ceph_lv0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "tags": {
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.cluster_name": "ceph",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.crush_device_class": "",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.encrypted": "0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.osd_id": "1",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.type": "block",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.vdo": "0",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:                "ceph.with_tpm": "0"
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            },
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "type": "block",
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:            "vg_name": "ceph_vg0"
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:        }
Nov 23 16:17:29 np0005532761 elated_burnell[275457]:    ]
Nov 23 16:17:29 np0005532761 elated_burnell[275457]: }
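[editor's note] The JSON printed by the elated_burnell container is ceph-volume-style LVM inventory for OSD 1: one bluestore block LV, ceph_vg0/ceph_lv0 on /dev/loop3, lv_size 21470642176 bytes (about 20 GiB). The create/start/die/remove churn around it fits cephadm running one-shot containers to refresh its per-host device view (an inference from the image labels and output, not something the log states). The flat "lv_tags" string and the nested "tags" object carry the same data; a sketch reconstructing the dict from the string form, with the sample shortened for clarity:

    # lv_tags is a comma-separated list of key=value pairs, as in the
    # output above (shortened here).
    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,"
               "ceph.osd_id=1,ceph.type=block")

    tags = dict(pair.split("=", 1) for pair in lv_tags.split(","))
    print(tags["ceph.osd_id"], tags["ceph.type"])  # -> 1 block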
Nov 23 16:17:29 np0005532761 systemd[1]: libpod-e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c.scope: Deactivated successfully.
Nov 23 16:17:29 np0005532761 conmon[275457]: conmon e73b8e22260e81d5560e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c.scope/container/memory.events
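[editor's note] The conmon nwarn about memory.events fires when the container's cgroup has already been torn down by the time conmon looks for OOM events, which is the expected outcome for the short-lived one-shot containers in this section (benign here, assuming nothing else is deleting cgroups). The file it wants is the flat key/value memory.events of cgroups v2; a sketch reading it for a live scope (the path below is hypothetical; for the exited container above it no longer exists):

    from pathlib import Path

    # memory.events is a flat "key value" file under cgroups v2.
    events_file = Path("/sys/fs/cgroup/machine.slice/example.scope/memory.events")
    if events_file.exists():
        events = dict(line.split()
                      for line in events_file.read_text().splitlines())
        print("oom_kill count:", events.get("oom_kill", "0"))
    else:
        print("cgroup already removed (what conmon hit above)")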
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.549513675 +0000 UTC m=+0.464525903 container died e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:17:29 np0005532761 systemd[1]: var-lib-containers-storage-overlay-7f6b173829ee42508404c3852c311fcd0b663323de622b455246f74019e7857b-merged.mount: Deactivated successfully.
Nov 23 16:17:29 np0005532761 podman[275441]: 2025-11-23 21:17:29.591673753 +0000 UTC m=+0.506686001 container remove e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 23 16:17:29 np0005532761 systemd[1]: libpod-conmon-e73b8e22260e81d5560e0fdb0ceebf52563f2f2b8a425d3b572f1687a699e21c.scope: Deactivated successfully.
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.183590984 +0000 UTC m=+0.047955203 container create 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 16:17:30 np0005532761 systemd[1]: Started libpod-conmon-2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295.scope.
Nov 23 16:17:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.253972521 +0000 UTC m=+0.118336810 container init 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.25957937 +0000 UTC m=+0.123943589 container start 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.262689502 +0000 UTC m=+0.127053741 container attach 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:17:30 np0005532761 silly_mclean[275585]: 167 167
Nov 23 16:17:30 np0005532761 systemd[1]: libpod-2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295.scope: Deactivated successfully.
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.263892494 +0000 UTC m=+0.128256713 container died 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.168982806 +0000 UTC m=+0.033347055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:30 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2b8c367fc5ab21a0689466ba98d43d03ceaaee4837d985a8c64b2a5a853fe5b8-merged.mount: Deactivated successfully.
Nov 23 16:17:30 np0005532761 podman[275569]: 2025-11-23 21:17:30.296951461 +0000 UTC m=+0.161315680 container remove 2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:17:30 np0005532761 systemd[1]: libpod-conmon-2aacf96a0c06343c94d1401bf94caa46313ab5abb6fbf17c2e881ee71564c295.scope: Deactivated successfully.
Nov 23 16:17:30 np0005532761 podman[275608]: 2025-11-23 21:17:30.436134793 +0000 UTC m=+0.036242043 container create 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 23 16:17:30 np0005532761 systemd[1]: Started libpod-conmon-011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7.scope.
Nov 23 16:17:30 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:17:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821fa80dba341a11e8b020fa20c7fe17435e30e22a20cc2266d349c6574f8d44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821fa80dba341a11e8b020fa20c7fe17435e30e22a20cc2266d349c6574f8d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821fa80dba341a11e8b020fa20c7fe17435e30e22a20cc2266d349c6574f8d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:30 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821fa80dba341a11e8b020fa20c7fe17435e30e22a20cc2266d349c6574f8d44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:30 np0005532761 podman[275608]: 2025-11-23 21:17:30.421220877 +0000 UTC m=+0.021328147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:17:30 np0005532761 podman[275608]: 2025-11-23 21:17:30.517355387 +0000 UTC m=+0.117462667 container init 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:17:30 np0005532761 podman[275608]: 2025-11-23 21:17:30.524272201 +0000 UTC m=+0.124379451 container start 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:17:30 np0005532761 podman[275608]: 2025-11-23 21:17:30.527481906 +0000 UTC m=+0.127589156 container attach 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:17:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
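[editor's note] The recurring _set_new_cache_sizes line is the monitor's memory autotuner redistributing its cache budget; as the names suggest, kv_alloc appears to be the key/value (RocksDB) cache share and inc_alloc/full_alloc the shares for cached incremental and full OSDMaps (an interpretation of the field names, not stated by the log). The values are bytes; converting them to MiB makes the split easier to read (plain arithmetic, not a Ceph API):

    # Byte values copied from the _set_new_cache_sizes line above.
    sizes = {
        "cache_size": 1020054731,
        "inc_alloc": 343932928,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }
    for name, nbytes in sizes.items():
        print(f"{name:>10}: {nbytes / 2**20:8.1f} MiB")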
Nov 23 16:17:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:30.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 892 KiB/s wr, 44 op/s
Nov 23 16:17:31 np0005532761 lvm[275701]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:17:31 np0005532761 lvm[275701]: VG ceph_vg0 finished
Nov 23 16:17:31 np0005532761 vigilant_solomon[275625]: {}
Nov 23 16:17:31 np0005532761 systemd[1]: libpod-011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7.scope: Deactivated successfully.
Nov 23 16:17:31 np0005532761 podman[275608]: 2025-11-23 21:17:31.185015647 +0000 UTC m=+0.785122967 container died 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:17:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-821fa80dba341a11e8b020fa20c7fe17435e30e22a20cc2266d349c6574f8d44-merged.mount: Deactivated successfully.
Nov 23 16:17:31 np0005532761 podman[275608]: 2025-11-23 21:17:31.223832537 +0000 UTC m=+0.823939787 container remove 011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_solomon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:17:31 np0005532761 systemd[1]: libpod-conmon-011ce7ba808f1a55a15642e92c6cdcfb536b674a67bc147d54bdda49e04ee4b7.scope: Deactivated successfully.
Nov 23 16:17:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:17:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:17:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
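[editor's note] These handle_command/audit pairs show the cephadm mgr module persisting results into the monitor's config-key store; the key names (mgr/cephadm/host.compute-0.devices.0, mgr/cephadm/host.compute-0) suggest it is storing the per-host device inventory that the one-shot ceph-volume containers above just collected. The same store can be inspected from the CLI via "ceph config-key get"; a sketch wrapping that call, with the key name copied from the audit line:

    import subprocess

    # "ceph config-key get" is the CLI counterpart of the mon_command
    # dispatched above; key name copied from the log.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=False,
    )
    print(out.stdout or out.stderr)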
Nov 23 16:17:32 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:32 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:32.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 23 16:17:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
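[editor's note] Each burst of four ganesha.nfsd STATE events is one pass through the NFS grace-period logic: the server (re)enters a 90-second grace window, reloads client recovery info from its RADOS backend, then checks whether grace can be lifted early. With no clients holding reclaimable state (clid count(0)) and rados_cluster_grace_enforcing returning -45, the window is re-entered every few seconds, which suggests the clustered grace database keeps this node in grace (an inference from the pattern; the log does not state the cause). A trivial sketch computing when a given window would lift on its own, using the timestamp format from these lines:

    from datetime import datetime, timedelta

    # Timestamp copied from the ganesha.nfsd line above; the 90-second
    # window comes from its "duration 90" field.
    started = datetime.strptime("23/11/2025 21:17:32", "%d/%m/%Y %H:%M:%S")
    print("grace lifts by:", started + timedelta(seconds=90))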
Nov 23 16:17:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:17:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:17:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:17:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:34.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 30 op/s
Nov 23 16:17:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:36 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:36.032 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:17:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:17:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:17:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 16:17:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:37.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:17:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:17:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:38.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 16:17:40 np0005532761 podman[275749]: 2025-11-23 21:17:40.560877595 +0000 UTC m=+0.071626241 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
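[editor's note] This podman health_status event reports the multipathd container healthy with a zero failing streak. Note that its config_data label is a Python-literal dict (single quotes, True), not JSON, so ast.literal_eval rather than json.loads is the right tool to pull fields such as the healthcheck command out of it:

    import ast

    # config_data as it appears in the health_status event above (shortened);
    # it is a Python literal, so ast.literal_eval parses it where
    # json.loads would fail on the single quotes.
    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}, 'net': 'host'}")

    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])  # -> /openstack/healthcheck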
Nov 23 16:17:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:40.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 16:17:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:42.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:17:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:44.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:17:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:17:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:47.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:17:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:17:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:17:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:17:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:17:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:17:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:48.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:48.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:17:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:50.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:17:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:51.875 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:17:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:51.876 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:17:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:17:51.876 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
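[editor's note] The acquiring/acquired/released trio above is oslo.concurrency's standard debug logging around an internal lock: neutron's process monitor took the "_check_child_processes" lock and held it for under a millisecond. A minimal sketch of the same API, assuming oslo.concurrency is installed; with debug logging enabled it emits the same three lines:

    from oslo_concurrency import lockutils

    # lockutils.synchronized wraps the function in the named internal lock,
    # producing the Acquiring/Acquired/Released debug lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_children():
        pass  # critical section

    check_children()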
Nov 23 16:17:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:52.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:17:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
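[note] This four-line ganesha block repeats throughout the section: the NFS server re-enters a 90-second grace period, reloads (zero) client records from the RADOS recovery backend, finds no clients to wait for, and rados_cluster_grace_enforcing returns ret=-45, after which grace is re-entered a few seconds later instead of being lifted. The shared grace database this logic reads can be inspected with the ganesha-rados-grace tool; the pool and namespace below are assumptions based on the .nfs pool and the cephfs cluster name visible elsewhere in this log, not values confirmed by it:

    ganesha-rados-grace --pool .nfs --ns cephfs dump

The dump typically shows the current/recovery epochs and each node's need-grace/enforcing flags, which is usually enough to see which member is holding the grace period open.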
Nov 23 16:17:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:54.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:17:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:56.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:17:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:57.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:17:57 np0005532761 podman[275814]: 2025-11-23 21:17:57.57275595 +0000 UTC m=+0.091681013 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 23 16:17:57 np0005532761 podman[275815]: 2025-11-23 21:17:57.583571747 +0000 UTC m=+0.091045866 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 23 16:17:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:17:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:17:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:17:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:17:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:58.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:58.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:17:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:17:58.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:17:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:00.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:02.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:18:03
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.nfs', 'images', 'default.rgw.meta']
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
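[note] The five balancer lines above are one periodic optimization pass: the mgr builds plan auto_2025-11-23_21:18:03 in upmap mode with a 5% misplaced ceiling, evaluates all twelve pools, and emits nothing because "prepared 0/10 upmap changes" means no upmap items were needed this iteration (the 10 is the balancer's per-pass cap on generated changes). The same state is visible from the CLI (standard ceph commands; output elided):

    ceph balancer status      # active flag, mode, last optimize start/result
    ceph balancer eval        # current distribution score (lower is better)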
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.573316) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683573347, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 4250930, "memory_usage": 4335824, "flush_reason": "Manual Compaction"}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683598666, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4091898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29488, "largest_seqno": 31603, "table_properties": {"data_size": 4082334, "index_size": 6058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19859, "raw_average_key_size": 20, "raw_value_size": 4063205, "raw_average_value_size": 4184, "num_data_blocks": 261, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932480, "oldest_key_time": 1763932480, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 25401 microseconds, and 7089 cpu microseconds.
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.598713) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4091898 bytes OK
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.598732) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.600615) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.600627) EVENT_LOG_v1 {"time_micros": 1763932683600623, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.600644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4242263, prev total WAL file size 4242263, number of live WAL files 2.
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.601606) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3995KB)], [65(12MB)]
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683601682, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16732062, "oldest_snapshot_seqno": -1}
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
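[note] Each pg_autoscaler pair above is a simple proportion: the raw PG target is the pool's share of raw capacity, times its bias, times the cluster PG budget (mon_target_pg_per_osd x OSD count), then quantized to a power of two with small changes suppressed. A minimal sketch reproducing the 'cephfs.cephfs.meta' line, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by the 3 x 20 GiB = 60 GiB cluster (neither value is stated in the log):

    # usage and bias copied from the log line for pool 'cephfs.cephfs.meta'
    usage_ratio = 5.087256625643029e-07   # pool's fraction of raw space
    bias = 4.0                            # CephFS metadata pools carry a 4x bias
    pg_budget = 100 * 3                   # assumed: mon_target_pg_per_osd=100, 3 OSDs
    pg_target = usage_ratio * bias * pg_budget
    print(pg_target)                      # ~0.0006104707950771635, the log's value

The same arithmetic checks out for '.mgr' (7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337), which is why these near-empty pools all quantize to their minimum or simply keep their current pg_num.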
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:18:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6231 keys, 14605956 bytes, temperature: kUnknown
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683721756, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14605956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14564729, "index_size": 24541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159713, "raw_average_key_size": 25, "raw_value_size": 14453067, "raw_average_value_size": 2319, "num_data_blocks": 987, "num_entries": 6231, "num_filter_entries": 6231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.722042) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14605956 bytes
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.723549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.2 rd, 121.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.1 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6752, records dropped: 521 output_compression: NoCompression
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.723586) EVENT_LOG_v1 {"time_micros": 1763932683723573, "job": 36, "event": "compaction_finished", "compaction_time_micros": 120197, "compaction_time_cpu_micros": 48271, "output_level": 6, "num_output_files": 1, "total_output_size": 14605956, "num_input_records": 6752, "num_output_records": 6231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683724604, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683726934, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.601474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.727065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.727073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.727077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.727079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:18:03.727081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
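[note] The rocksdb burst above is the monitor compacting its store ("flush_reason": "Manual Compaction"): job 35 flushes a ~4.1 MB memtable to L0 table #67, job 36 merges it with the existing ~12 MB L6 table #65 into table #68 (6231 of 6752 records kept, 521 dropped), and the WAL plus both input SSTs are deleted, leaving a single level-6 file. The monitor schedules these compactions itself as it trims; the same operation can be requested by hand with a standard admin command (mon id from the log):

    ceph tell mon.compute-0 compact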
Nov 23 16:18:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:04.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:05 np0005532761 nova_compute[257263]: 2025-11-23 21:18:05.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:07.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:07] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:18:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:07] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:18:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:08.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:08.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:10 np0005532761 nova_compute[257263]: 2025-11-23 21:18:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:10.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:11 np0005532761 podman[275901]: 2025-11-23 21:18:11.534298984 +0000 UTC m=+0.056239583 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
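[note] The podman health_status entries (ovn_controller, ovn_metadata_agent, multipathd above) are emitted by podman's per-container healthcheck timers: each fires the 'test' command from config_data, here the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<container>, and records the result and failing streak. The same check can be run on demand (standard podman CLI; container name from the log):

    podman healthcheck run multipathd; echo rc=$?    # rc=0 means healthy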
Nov 23 16:18:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:12.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:12.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:13 np0005532761 nova_compute[257263]: 2025-11-23 21:18:13.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:13 np0005532761 nova_compute[257263]: 2025-11-23 21:18:13.036 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:13 np0005532761 nova_compute[257263]: 2025-11-23 21:18:13.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:18:14 np0005532761 nova_compute[257263]: 2025-11-23 21:18:14.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:14 np0005532761 nova_compute[257263]: 2025-11-23 21:18:14.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
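[note] _reclaim_queued_deletes at 21:18:13 is a no-op because reclaim_instance_interval is left at its default of 0, so deletes are immediate and nothing ever sits in SOFT_DELETED for this periodic task to reclaim. Enabling soft delete is a one-line nova.conf change; the sketch below uses an illustrative one-hour window, not a value from this deployment:

    [DEFAULT]
    # >0 turns deletes into soft deletes: instances stay SOFT_DELETED (and
    # restorable with `openstack server restore`) for this many seconds before
    # _reclaim_queued_deletes purges them on a later periodic pass
    reclaim_instance_interval = 3600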
Nov 23 16:18:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:14.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:15 np0005532761 nova_compute[257263]: 2025-11-23 21:18:15.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:15 np0005532761 nova_compute[257263]: 2025-11-23 21:18:15.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:18:15 np0005532761 nova_compute[257263]: 2025-11-23 21:18:15.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:18:15 np0005532761 nova_compute[257263]: 2025-11-23 21:18:15.050 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:18:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:16.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:17.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:17] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:18:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:17] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:18:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:18:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:18:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:18.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.060 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.060 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:18:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:18:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160981792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.510 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.644 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.645 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.645 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.645 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:18:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.716 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.717 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:18:20 np0005532761 nova_compute[257263]: 2025-11-23 21:18:20.731 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:18:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:20.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:18:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600494203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:18:21 np0005532761 nova_compute[257263]: 2025-11-23 21:18:21.157 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:18:21 np0005532761 nova_compute[257263]: 2025-11-23 21:18:21.164 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:18:21 np0005532761 nova_compute[257263]: 2025-11-23 21:18:21.190 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:18:21 np0005532761 nova_compute[257263]: 2025-11-23 21:18:21.192 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:18:21 np0005532761 nova_compute[257263]: 2025-11-23 21:18:21.192 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:18:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:24 np0005532761 nova_compute[257263]: 2025-11-23 21:18:24.193 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:26 np0005532761 nova_compute[257263]: 2025-11-23 21:18:26.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:18:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:26.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:27.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:27.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:28 np0005532761 systemd-logind[820]: New session 56 of user zuul.
Nov 23 16:18:28 np0005532761 systemd[1]: Started Session 56 of User zuul.
Nov 23 16:18:28 np0005532761 podman[276012]: 2025-11-23 21:18:28.372473993 +0000 UTC m=+0.082888440 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:18:28 np0005532761 podman[276010]: 2025-11-23 21:18:28.391172728 +0000 UTC m=+0.109570887 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:18:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:28.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:28.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:28.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:30.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:30 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16422 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:31 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26435 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:31 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25879 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:31 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16437 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:18:31 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1637570107' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:32 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25891 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:18:32 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:18:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:32.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:32 np0005532761 podman[276517]: 2025-11-23 21:18:32.958554878 +0000 UTC m=+0.050122911 container create 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 16:18:33 np0005532761 systemd[1]: Started libpod-conmon-784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c.scope.
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:32.940022117 +0000 UTC m=+0.031590180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:33.05699996 +0000 UTC m=+0.148568013 container init 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:33.06302394 +0000 UTC m=+0.154591973 container start 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:33.066153612 +0000 UTC m=+0.157721665 container attach 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:18:33 np0005532761 pensive_tu[276535]: 167 167
Nov 23 16:18:33 np0005532761 systemd[1]: libpod-784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c.scope: Deactivated successfully.
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:33.069039219 +0000 UTC m=+0.160607252 container died 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:18:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ce39c20ffa952ec6517a2d95bc6caf7d0a2bf9bf881af36f48db14f509384dc8-merged.mount: Deactivated successfully.
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:18:33 np0005532761 podman[276517]: 2025-11-23 21:18:33.109136113 +0000 UTC m=+0.200704156 container remove 784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:18:33 np0005532761 systemd[1]: libpod-conmon-784625a36757ced3ce73b594195aa33cb53455390c0299da9e09ebf8e32d3a8c.scope: Deactivated successfully.
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:18:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:18:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.28546356 +0000 UTC m=+0.046284679 container create c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 16:18:33 np0005532761 systemd[1]: Started libpod-conmon-c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1.scope.
Nov 23 16:18:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.262324906 +0000 UTC m=+0.023146045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.366928911 +0000 UTC m=+0.127750050 container init c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.379625557 +0000 UTC m=+0.140446676 container start c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.384229249 +0000 UTC m=+0.145050398 container attach c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 16:18:33 np0005532761 pedantic_morse[276581]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:18:33 np0005532761 pedantic_morse[276581]: --> All data devices are unavailable
Nov 23 16:18:33 np0005532761 systemd[1]: libpod-c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1.scope: Deactivated successfully.
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.682307176 +0000 UTC m=+0.443128345 container died c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 16:18:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b05cd5961677006eb3847ca9e07328286464ae16236612c8b5ebc4d9d68f4ddc-merged.mount: Deactivated successfully.
Nov 23 16:18:33 np0005532761 podman[276560]: 2025-11-23 21:18:33.733967186 +0000 UTC m=+0.494788305 container remove c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:18:33 np0005532761 systemd[1]: libpod-conmon-c66603943010f9953b4765df51d0c8c347bff2f9a38e3a58127ae4eb7b365eb1.scope: Deactivated successfully.
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.246667006 +0000 UTC m=+0.032644007 container create 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:18:34 np0005532761 systemd[1]: Started libpod-conmon-8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925.scope.
Nov 23 16:18:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.321500111 +0000 UTC m=+0.107477122 container init 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.32676915 +0000 UTC m=+0.112746151 container start 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.233075765 +0000 UTC m=+0.019052796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.330323925 +0000 UTC m=+0.116300956 container attach 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 23 16:18:34 np0005532761 kind_poincare[276734]: 167 167
Nov 23 16:18:34 np0005532761 systemd[1]: libpod-8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925.scope: Deactivated successfully.
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.331937188 +0000 UTC m=+0.117914189 container died 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:18:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-829d6c5d2fe375b5769bd612e4b2e8a50aca6709f3935d50c358e97e8c5e3736-merged.mount: Deactivated successfully.
Nov 23 16:18:34 np0005532761 podman[276717]: 2025-11-23 21:18:34.367035079 +0000 UTC m=+0.153012110 container remove 8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 16:18:34 np0005532761 systemd[1]: libpod-conmon-8f0099a65ad6677eeebe2fb4ec3a0fec392bf204383a97b33a7e27857e276925.scope: Deactivated successfully.
Nov 23 16:18:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.530557876 +0000 UTC m=+0.035636606 container create 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:18:34 np0005532761 systemd[1]: Started libpod-conmon-4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f.scope.
Nov 23 16:18:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951ba0558a8f3ae87547d25a0310ea9f3f6930696f9bc75d7b6a099cd122a520/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951ba0558a8f3ae87547d25a0310ea9f3f6930696f9bc75d7b6a099cd122a520/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951ba0558a8f3ae87547d25a0310ea9f3f6930696f9bc75d7b6a099cd122a520/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951ba0558a8f3ae87547d25a0310ea9f3f6930696f9bc75d7b6a099cd122a520/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.514823259 +0000 UTC m=+0.019902019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.613560958 +0000 UTC m=+0.118639688 container init 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.619110095 +0000 UTC m=+0.124188825 container start 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.62231678 +0000 UTC m=+0.127395510 container attach 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 16:18:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:34.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:34.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]: {
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:    "1": [
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:        {
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "devices": [
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "/dev/loop3"
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            ],
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "lv_name": "ceph_lv0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "lv_size": "21470642176",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "name": "ceph_lv0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "tags": {
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.cluster_name": "ceph",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.crush_device_class": "",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.encrypted": "0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.osd_id": "1",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.type": "block",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.vdo": "0",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:                "ceph.with_tpm": "0"
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            },
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "type": "block",
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:            "vg_name": "ceph_vg0"
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:        }
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]:    ]
Nov 23 16:18:34 np0005532761 jolly_liskov[276776]: }
Nov 23 16:18:34 np0005532761 systemd[1]: libpod-4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f.scope: Deactivated successfully.
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.924911017 +0000 UTC m=+0.429989747 container died 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 16:18:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-951ba0558a8f3ae87547d25a0310ea9f3f6930696f9bc75d7b6a099cd122a520-merged.mount: Deactivated successfully.
Nov 23 16:18:34 np0005532761 podman[276759]: 2025-11-23 21:18:34.962457842 +0000 UTC m=+0.467536582 container remove 4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_liskov, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 23 16:18:34 np0005532761 systemd[1]: libpod-conmon-4ea6e23695cdb17f9636358debcc074f9eba536b91f60248761d64bc0e973d4f.scope: Deactivated successfully.
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.518841221 +0000 UTC m=+0.037887766 container create a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:18:35 np0005532761 systemd[1]: Started libpod-conmon-a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d.scope.
Nov 23 16:18:35 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.503045472 +0000 UTC m=+0.022092047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.605381757 +0000 UTC m=+0.124428322 container init a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.614683243 +0000 UTC m=+0.133729788 container start a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.61832239 +0000 UTC m=+0.137368955 container attach a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:18:35 np0005532761 romantic_blackwell[276918]: 167 167
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.621043312 +0000 UTC m=+0.140089857 container died a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 16:18:35 np0005532761 systemd[1]: libpod-a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d.scope: Deactivated successfully.
Nov 23 16:18:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-bf7d11476990ca44fd77e7ca1dfac2810571a5eef3c75a5f8ea2b833c6f057cf-merged.mount: Deactivated successfully.
Nov 23 16:18:35 np0005532761 podman[276902]: 2025-11-23 21:18:35.663067147 +0000 UTC m=+0.182113702 container remove a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:18:35 np0005532761 systemd[1]: libpod-conmon-a93aeef7274f1a3ed5958af596d6232ee80f750c67b1900ad818ed682ec7ee9d.scope: Deactivated successfully.
Nov 23 16:18:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:35 np0005532761 podman[276949]: 2025-11-23 21:18:35.830532478 +0000 UTC m=+0.046116724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:18:35 np0005532761 podman[276949]: 2025-11-23 21:18:35.942428766 +0000 UTC m=+0.158012982 container create 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 16:18:36 np0005532761 systemd[1]: Started libpod-conmon-750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057.scope.
Nov 23 16:18:36 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:18:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3dc7812d968d65157bfb05adc4c94eb5713ca590cbcf3a9469d3e40ccd31c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3dc7812d968d65157bfb05adc4c94eb5713ca590cbcf3a9469d3e40ccd31c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3dc7812d968d65157bfb05adc4c94eb5713ca590cbcf3a9469d3e40ccd31c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:36 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3dc7812d968d65157bfb05adc4c94eb5713ca590cbcf3a9469d3e40ccd31c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:18:36 np0005532761 podman[276949]: 2025-11-23 21:18:36.03001997 +0000 UTC m=+0.245604186 container init 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:18:36 np0005532761 podman[276949]: 2025-11-23 21:18:36.041615007 +0000 UTC m=+0.257199263 container start 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 16:18:36 np0005532761 podman[276949]: 2025-11-23 21:18:36.045309586 +0000 UTC m=+0.260893802 container attach 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:18:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:18:36 np0005532761 lvm[277043]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:18:36 np0005532761 lvm[277043]: VG ceph_vg0 finished
Nov 23 16:18:36 np0005532761 strange_driscoll[276965]: {}
Nov 23 16:18:36 np0005532761 systemd[1]: libpod-750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057.scope: Deactivated successfully.
Nov 23 16:18:36 np0005532761 systemd[1]: libpod-750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057.scope: Consumed 1.164s CPU time.
Nov 23 16:18:36 np0005532761 podman[276949]: 2025-11-23 21:18:36.787699548 +0000 UTC m=+1.003283764 container died 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:18:36 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ff3dc7812d968d65157bfb05adc4c94eb5713ca590cbcf3a9469d3e40ccd31c0-merged.mount: Deactivated successfully.
Nov 23 16:18:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:18:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:18:36 np0005532761 podman[276949]: 2025-11-23 21:18:36.846793165 +0000 UTC m=+1.062377391 container remove 750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_driscoll, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:18:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:36.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:36 np0005532761 systemd[1]: libpod-conmon-750653acfde209be6c801f8c70a684c2674ab946af4d5e04d2a69624c24de057.scope: Deactivated successfully.
Nov 23 16:18:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:18:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:18:36 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:37.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:37 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:37 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:37 np0005532761 ovs-vsctl[277112]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 16:18:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:18:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:38.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:39 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 16:18:39 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 16:18:39 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 16:18:39 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26456 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:39 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: cache status {prefix=cache status} (starting...)
Nov 23 16:18:39 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:39 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:18:39 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:18:39 np0005532761 lvm[277426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:18:39 np0005532761 lvm[277426]: VG ceph_vg0 finished
Nov 23 16:18:39 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26468 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:39 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: client ls {prefix=client ls} (starting...)
Nov 23 16:18:39 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26480 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16479 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:18:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777815802' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26498 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25912 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:40.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:40.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 16:18:40 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:18:40 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:18:40 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16497 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25927 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:18:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030825737' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26534 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16512 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 16:18:41 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700110618' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25942 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26558 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16524 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: ops {prefix=ops} (starting...)
Nov 23 16:18:41 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25954 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602067548' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:18:42 np0005532761 podman[277856]: 2025-11-23 21:18:42.556490746 +0000 UTC m=+0.067888221 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 16:18:42 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: session ls {prefix=session ls} (starting...)
Nov 23 16:18:42 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:18:42 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25975 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:18:42 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990081815' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16569 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:42 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: status {prefix=status} (starting...)
Nov 23 16:18:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:42.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:43 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.25987 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207484633' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26630 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:18:43.105+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:43 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741792213' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201040668' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287440862' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:18:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743513132' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16632 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:18:43.963+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:43 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:44 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26681 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016517135' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173105010' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 16:18:44 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26041 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:18:44.497+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:44 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:18:44 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:44.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788270769' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 16:18:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749544287' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16680 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26738 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:18:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833699972' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26756 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26074 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16704 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:18:45 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826487999' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:18:45 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26095 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16725 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 1982464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905977 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 1982464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273ddc00 session 0x559a28175680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 1974272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 1966080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 1957888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 1957888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905977 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 1957888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 1949696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 1949696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 1949696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 1941504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905977 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 1941504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 1933312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.509483337s of 63.613895416s, submitted: 2
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 1908736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 1908736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 1900544 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906109 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 1892352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 1884160 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1875968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 1875968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 1867776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907637 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 1867776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 1835008 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 1826816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 1826816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 1818624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907637 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.590957642s of 13.686134338s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 1802240 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 1802240 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 1794048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 1794048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 1794048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907337 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 1785856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 1785856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 1785856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 1777664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 1777664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1769472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1769472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1769472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 1761280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 1761280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 1753088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1736704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 1736704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1728512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 1728512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1720320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 1720320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1712128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1712128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 1712128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 1703936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1695744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 1695744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1687552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 1687552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1679360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1679360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 1679360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1671168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 1671168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 1662976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1646592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 1646592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 1638400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 1638400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 1638400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 1630208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 1630208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1622016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 1622016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 1613824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 1605632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1597440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1597440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 1597440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 1589248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 1589248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 1581056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 1581056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 1581056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 1572864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 1556480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1548288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1548288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 1548288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 1540096 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 1540096 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1531904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1531904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 1531904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1523712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 1523712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 1515520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 1515520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 1515520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 1507328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 1499136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1490944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1490944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 1490944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1482752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 1482752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 1474560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 1474560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 1474560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 1466368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 1466368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 1458176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 1458176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 1458176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 1449984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 1449984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 1441792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 1441792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 1433600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 1433600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 1425408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 1425408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507000 session 0x559a258bf4a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a260d94a0
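ms_handle_reset records a messenger connection being torn down and its session released; the con and session values are heap addresses of the connection and session objects. A few isolated resets like these two are routine (a peer or client closing an idle connection); only sustained bursts would suggest network trouble. When triaging a flood like this one, tallying message types is the quickest way to separate routine chatter from anomalies; a sketch (the file path and category labels are placeholders, not Ceph conventions):

    import re
    from collections import Counter

    patterns = {
        "tune_memory":     re.compile(r"prioritycache tune_memory"),
        "heartbeat":       re.compile(r"heartbeat osd_stat"),
        "rocksdb_ratio":   re.compile(r"commit_cache_size High Pri"),
        "resize_shards":   re.compile(r"_resize_shards"),
        "kv_sync_util":    re.compile(r"_kv_sync_thread utilization"),
        "ms_handle_reset": re.compile(r"ms_handle_reset"),
    }

    counts = Counter()
    with open("ceph-osd.log") as f:       # placeholder path
        for entry in f:
            for name, pat in patterns.items():
                if pat.search(entry):
                    counts[name] += 1
                    break

    for name, n in counts.most_common():
        print(f"{name:15s} {n}")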
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 1417216 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 1417216 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 1409024 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 1409024 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 1400832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 1400832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 1400832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907489 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 1392640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 1392640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 1384448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 103.344543457s of 103.352539062s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 1368064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1359872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907621 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 1359872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1327104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 1302528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909149 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 1286144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 1269760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 1261568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 1261568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 1261568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909149 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.000168800s of 12.038130760s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 1236992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 1228800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 1220608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 1204224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 1204224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908410 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 1187840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ec000 session 0x559a27682d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 1187840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908410 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 1179648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 1179648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 1179648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 1171456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 1171456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908410 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 1163264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 1163264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.102205276s of 19.110595703s, submitted: 2
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 1146880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908542 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 1146880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1138688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1105920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1097728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1097728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911582 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 1081344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 1081344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 1081344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1056768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1056768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910823 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1056768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1040384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910975 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.776401520s of 16.817495346s, submitted: 12
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910843 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910843 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910843 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a260d65a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910843 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 933888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910843 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.911348343s of 27.204196930s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910975 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910991 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 860160 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 860160 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909641 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.883573532s of 15.971766472s, submitted: 12
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 811008 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.4 total, 600.0 interval
Cumulative writes: 7175 writes, 29K keys, 7175 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 7175 writes, 1282 syncs, 5.60 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7175 writes, 29K keys, 7175 commit groups, 1.0 writes per commit group, ingest: 20.52 MB, 0.03 MB/s
Interval WAL: 7175 writes, 1282 syncs, 5.60 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 548864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 548864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 516096 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 516096 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a27444400 session 0x559a2814e1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 393216 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 385024 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 385024 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 376832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 376832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 100.746490479s of 101.052162170s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 352256 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909809 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909809 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.578799248s of 12.660467148s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 286720 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909509 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a252c8400 session 0x559a27a0a000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.048820496s of 38.052608490s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1261568 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 163840 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 40960 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 40960 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 24576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 24576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 24576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 24576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 24576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909661 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.833550453s of 10.158369064s, submitted: 226
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 16384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 16384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 16384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 16384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a256f25a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 16384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909809 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273ddc00 session 0x559a274de3c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909677 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 0 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909677 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.419290543s of 14.591596603s, submitted: 5
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909941 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 974848 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 950272 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 950272 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 950272 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909957 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 942080 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 933888 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.045377731s of 12.271936417s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a2814e780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911321 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 876544 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911173 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 868352 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911173 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.981966972s of 13.014721870s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 843776 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 843776 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 843776 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912833 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912833 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.986861229s of 12.238460541s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912226 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a279441e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912094 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912094 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.342787743s of 15.390837669s, submitted: 2
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912226 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a27444400 session 0x559a256f1a40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915266 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 794624 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915266 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 794624 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.992933273s of 12.063241959s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 1835008 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914791 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 1835008 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 1835008 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 1794048 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 1794048 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 1794048 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916187 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.486403465s of 12.832937241s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915428 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ecc00 session 0x559a27e3c960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914857 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914857 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.806301117s of 16.475910187s, submitted: 4
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914989 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915005 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914246 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.799649239s of 13.969947815s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914266 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914266 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914266 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a281743c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914266 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914266 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.342958450s of 25.345823288s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914398 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915926 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.071847916s of 10.678319931s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915626 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a27945e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.203216553s of 58.206161499s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915910 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917438 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.999002457s of 12.150321007s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916831 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273ddc00 session 0x559a2552e1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.842432022s of 17.849073410s, submitted: 2
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916831 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 1613824 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919871 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919264 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.914249420s of 14.992810249s, submitted: 13
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed000 session 0x559a256f4d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a28175680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919132 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919132 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.988170624s of 10.991490364s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919264 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919396 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273dcc00 session 0x559a272352c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.203738213s of 11.307200432s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918805 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ec000 session 0x559a28126000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.309871674s of 44.328369141s, submitted: 5
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135511398s of 11.187865257s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917798 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a2652af00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.318428040s of 29.321868896s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: mgrc ms_handle_reset ms_handle_reset con 0x559a2604c800
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: mgrc handle_mgr_configure stats_period=5
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.810980797s of 13.847840309s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917798 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a27deaf00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.371376038s of 20.374578476s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919610 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ecc00 session 0x559a2654c1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919610 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:18:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642177791' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.801570892s of 16.847188950s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919310 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919594 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921122 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.872009277s of 11.917662621s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920515 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1261568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.4 total, 600.0 interval
Cumulative writes: 7984 writes, 31K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7984 writes, 1682 syncs, 4.75 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 809 writes, 1426 keys, 809 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s
Interval WAL: 809 writes, 400 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
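RocksDB emits the periodic stats dump above as one multi-line message; rsyslog stores it as a single record with every embedded newline escaped octally as #012 (LF). When reading the raw journal text, the escape can be undone mechanically:

    # rsyslog escapes control characters as #NNN octal; #012 is "\n".
    raw = "** DB Stats **#012Uptime(secs): 1200.4 total, 600.0 interval"
    print(raw.replace("#012", "\n"))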
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed400 session 0x559a264223c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread fragmentation_score=0.000028 took=0.000040s
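fragmentation_score grades the free-space layout from 0 (large contiguous extents) to 1 (badly fragmented); 0.000028 on this nearly empty OSD is effectively zero. A toy metric with the same shape, not BlueStore's actual formula:

    # Toy score over free-extent sizes: one big extent -> 0, many tiny -> ~1.
    def fragmentation_score(extents: list[int]) -> float:
        total = sum(extents)
        return 1.0 - sum(e * e for e in extents) / (total * total)

    print(fragmentation_score([20 * 2**30]))   # 0.0   (single 20 GiB extent)
    print(fragmentation_score([4096] * 1000))  # 0.999 (1000 x 4 KiB extents)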
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.169040680s of 53.178115845s, submitted: 3
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920515 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922043 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921436 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.782189369s of 13.827088356s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed800 session 0x559a28391680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.232078552s of 33.253677368s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1155072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1155072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1138688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.656509399s of 10.978686333s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921152 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a28ac9e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.745338440s of 18.042085648s, submitted: 3
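[editor's note] The _kv_sync_thread utilization lines report how long BlueStore's KV-commit thread sat idle over the measurement window and how many transactions it flushed: idle 17.75 s of 18.04 s for 3 submits means the OSD is nearly quiescent. A minimal sketch (the regex and helper name are mine) to turn such a line into a busy percentage:

    import re

    UTIL_RE = re.compile(
        r"_kv_sync_thread utilization: idle (?P<idle>[\d.]+)s "
        r"of (?P<total>[\d.]+)s, submitted: (?P<subs>\d+)")

    def kv_sync_busy(line):
        """Return (busy %, transactions submitted) for a utilization line."""
        m = UTIL_RE.search(line)
        if not m:
            return None
        idle, total = float(m["idle"]), float(m["total"])
        return (1 - idle / total) * 100, int(m["subs"])

    line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread "
            "utilization: idle 17.745338440s of 18.042085648s, submitted: 3")
    print(kv_sync_busy(line))   # ~(1.6, 3): about 1.6% busy, 3 transactions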
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922373 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.843791962s of 13.889540672s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923585 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a28ad2000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26795 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
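[editor's note] The one non-OSD line in this window is the mgr's audit channel recording that client.admin dispatched an `orch status` command (the cephadm orchestrator status query) to the mon-mgr target. The cmd= payload is plain JSON, so it can be lifted straight out of the audit line; a small sketch (the slicing logic is mine):

    import json

    audit = ("log_channel(audit) log [DBG] : from='client.26795 -' "
             "entity='client.admin' cmd=[{\"prefix\": \"orch status\", "
             "\"detail\": true, \"target\": [\"mon-mgr\", \"\"]}]: dispatch")

    # The JSON array sits between "cmd=" and the trailing ": dispatch".
    payload = audit.split("cmd=", 1)[1].rsplit(":", 1)[0].strip()
    cmd = json.loads(payload)[0]
    print(cmd["prefix"], cmd["detail"], cmd["target"])
    # -> orch status True ['mon-mgr', '']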
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.843379974s of 14.849747658s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 1105920 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923809 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1007616 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 909312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 909312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.654523849s of 10.987039566s, submitted: 220
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ec000 session 0x559a2569c780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922687 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.214892387s of 11.249080658s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922687 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.709618568s of 11.720647812s, submitted: 3
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1916928 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1916928 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.265705109s of 12.293758392s, submitted: 7
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922403 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a28ab8960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a28c64000 session 0x559a2572fe00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.887575150s of 30.890459061s, submitted: 1
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924347 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.551795959s of 11.675850868s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924347 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a283950e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924047 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923608 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.047493935s of 14.154020309s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923624 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.941125870s of 10.020229340s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923624 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923324 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a2569da40
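ms_handle_reset records the messenger dropping a connection and the OSD discarding the attached session; scattered resets during map churn like this are routine. A hedged helper for tallying them per connection pointer from a journal dump (fed via something like journalctl -t ceph-osd; the script itself is arbitrary, not a Ceph tool):

    # Count ms_handle_reset events per connection pointer from stdin,
    # to spot a peer that keeps dropping its session.
    import re
    import sys
    from collections import Counter

    resets = Counter()
    for line in sys.stdin:
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            resets[m.group(1)] += 1

    for con, n in resets.most_common():
        print(n, con)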
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.143291473s of 46.153369904s, submitted: 3
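The _kv_sync_thread utilization line above quantifies how little the RocksDB sync thread is doing over this interval; the arithmetic:

    # Busy time is the reporting period minus the idle time.
    idle, period, submitted = 46.143291473, 46.153369904, 3
    busy = period - idle
    print("busy %.3f ms (%.4f%%), %d txns -> %.1f ms/txn"
          % (busy * 1e3, 100 * busy / period, submitted, busy * 1e3 / submitted))
    # ~10 ms busy out of ~46 s: the OSD's kv sync path is essentially idle.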
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923608 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed800 session 0x559a28c674a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.952695847s of 17.990190506s, submitted: 10
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
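The two handle_osd_map lines show the OSD consuming new osdmap epochs one step at a time: each message carries epochs [first,last], "i have" is the OSD's current epoch, and "src has" is the sender's full retained range. A sketch of the catch-up rule implied by this output (a reading of the log, not Ceph's actual OSD::handle_osd_map implementation):

    # Given the epoch range carried in a message and the epoch the OSD already
    # has, return the epochs that actually get applied.
    def epochs_to_apply(msg_first, msg_last, have):
        start = max(msg_first, have + 1)   # anything at or below `have` is stale
        return list(range(start, msg_last + 1))

    print(epochs_to_apply(142, 143, 142))   # [143]
    print(epochs_to_apply(144, 144, 143))   # [144]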
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0xfd14a/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987166 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 18448384 heap: 102727680 old mem: 2845415832 new mem: 2845415832
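At this point the heap figure jumps from 85942272 to 102727680 bytes while mapped barely moves, so the new span lands almost entirely in unmapped, i.e. pages the allocator (tcmalloc, by the look of these counters) has freed but retains. The delta:

    # Size of the one-step heap growth visible in the line above.
    old_heap, new_heap = 85942272, 102727680
    print("heap grew %.2f MiB" % ((new_heap - old_heap) / 2**20))   # 16.01 MiB
    print("unmapped now %.2f MiB" % (18448384 / 2**20))             # 17.59 MiB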
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 145 ms_handle_reset con 0x559a26529c00 session 0x559a28ac9c20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 18415616 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 146 ms_handle_reset con 0x559a271d1400 session 0x559a28ac9e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbe58000/0x0/0x4ffc00000, data 0x9013a2/0x9b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 18333696 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024111 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.934324265s of 11.164364815s, submitted: 47
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb9e6000/0x0/0x4ffc00000, data 0xd734aa/0xe26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027270 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26507400 session 0x559a274de960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.150621414s of 23.169740677s, submitted: 16
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026430 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026446 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 18382848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025687 data_alloc: 218103808 data_used: 110592
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025839 data_alloc: 218103808 data_used: 114688
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64c00 session 0x559a27e3bc20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64800 session 0x559a266a7860
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26507400 session 0x559a28482780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.736700058s of 17.784509659s, submitted: 11
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26529c00 session 0x559a28482b40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a271d1400 session 0x559a279501e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64c00 session 0x559a27950960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049081 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb9df000/0x0/0x4ffc00000, data 0xd77568/0xe2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64400 session 0x559a2814ef00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a26507400 session 0x559a2814f0e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a26529c00 session 0x559a2814fe00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a271d1400 session 0x559a269d4000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64000 session 0x559a283734a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65c00 session 0x559a28395c20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053807 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64c00 session 0x559a264223c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65400 session 0x559a28126b40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65800 session 0x559a28126780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055621 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0xd796c8/0xe31000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0xd796c8/0xe31000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.069388390s of 16.428546906s, submitted: 19
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058211 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d7000/0x0/0x4ffc00000, data 0xd7b69a/0xe34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058211 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d7000/0x0/0x4ffc00000, data 0xd7b69a/0xe34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a279c41e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a280ab680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 10166272 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.318939209s of 31.515851974s, submitted: 19
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079280 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a2742b860
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a28ab8000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a28ad2960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a283910e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65800 session 0x559a26424f00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 9756672 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 9756672 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a27e3ab40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb815000/0x0/0x4ffc00000, data 0xf3e69a/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a256f25a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb815000/0x0/0x4ffc00000, data 0xf3e69a/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a27de9e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085882 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a28ad3e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 9691136 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 9691136 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088638 data_alloc: 218103808 data_used: 7491584
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
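
The lone ceph-mgr line is the cluster-wide picture behind all of the per-OSD chatter: 337 PGs, all active+clean, 41 MiB of logical data and 272 MiB raw used out of 60 GiB (three ~20 GiB OSDs), plus a trickle of reads. A rough parse tailored to this one message format (not a general Ceph log parser):

    import re

    line = ("pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, "
            "272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s")
    m = re.search(r"(\d+) pgs: (\d+) active\+clean; (.+?) data, (.+?) used, "
                  r"(.+?) / (.+?) avail", line)
    print(m.groups())
    # ('337', '337', '41 MiB', '272 MiB', '60 GiB', '60 GiB')
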
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088638 data_alloc: 218103808 data_used: 7491584
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.456800461s of 17.521381378s, submitted: 23
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,1,1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 7118848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
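
The RocksDB message marks a memtable switch: writes cut over to a fresh memtable backed by WAL file #43, and "Immutable memtables: 0" says no earlier memtable is still queued for flush. Extracting the two numbers:

    import re

    line = ("rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable "
            "created with log file: #43. Immutable memtables: 0.")
    m = re.search(r"log file: #(\d+)\. Immutable memtables: (\d+)", line)
    print(int(m.group(1)), int(m.group(2)))
    # 43 0
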
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116592 data_alloc: 218103808 data_used: 7553024
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x106f69a/0x1128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116608 data_alloc: 218103808 data_used: 7553024
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x106f69a/0x1128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d1400 session 0x559a28ad2f00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.697155952s of 10.029232979s, submitted: 58
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97746944 unmapped: 4980736 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a27950d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071094 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0xd8469a/0xe3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a26424d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ad2000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2839c000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0xd8469a/0xe3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ed400 session 0x559a279514a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a283905a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.673070908s of 18.772710800s, submitted: 31
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067197 data_alloc: 218103808 data_used: 6930432
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2839c1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27defa40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28c96000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a27944d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a260d65a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a256f45a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28ab83c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a2742bc20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a2654cd20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 19472384 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142695 data_alloc: 218103808 data_used: 6930432
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 19456000 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 19456000 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 19374080 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 19374080 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.757060051s of 12.911256790s, submitted: 44
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a256f03c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97812480 unmapped: 19611648 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143476 data_alloc: 218103808 data_used: 6942720
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97828864 unmapped: 19595264 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203384 data_alloc: 234881024 data_used: 15896576
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203384 data_alloc: 234881024 data_used: 15896576
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.322823524s of 12.350434303s, submitted: 9
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 10035200 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1,2])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 9838592 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 8495104 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 8462336 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 8462336 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 8454144 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109010944 unmapped: 8413184 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109010944 unmapped: 8413184 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109027328 unmapped: 8396800 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109027328 unmapped: 8396800 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 8364032 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 8364032 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251896 data_alloc: 234881024 data_used: 16195584
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 8355840 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 8347648 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.471483231s of 25.635581970s, submitted: 53
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a2845c3c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28c661e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 15810560 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ad21e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.068742752s of 21.379354477s, submitted: 53
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27490c00 session 0x559a27ded2c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101558 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a28c94960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103363 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a28c67a40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.837812424s of 10.922425270s, submitted: 19
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a260d61e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 29384704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187558 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 29384704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a2814e960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 29114368 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192303 data_alloc: 218103808 data_used: 7024640
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 23003136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99b0000/0x0/0x4ffc00000, data 0x1c0469d/0x1cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291407 data_alloc: 234881024 data_used: 21819392
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99b0000/0x0/0x4ffc00000, data 0x1c0469d/0x1cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291863 data_alloc: 234881024 data_used: 21831680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 18300928 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.519432068s of 16.630643845s, submitted: 19
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 14647296 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f93d8000/0x0/0x4ffc00000, data 0x21dc69d/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 15482880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f935e000/0x0/0x4ffc00000, data 0x225669d/0x230e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359559 data_alloc: 234881024 data_used: 22806528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 14860288 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16400384 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357143 data_alloc: 234881024 data_used: 22876160
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x227769d/0x232f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x227769d/0x232f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.680171967s of 14.030892372s, submitted: 63
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357415 data_alloc: 234881024 data_used: 22876160
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 16302080 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 16302080 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cad800 session 0x559a260d63c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9334000/0x0/0x4ffc00000, data 0x228069d/0x2338000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 16039936 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc6000/0x0/0x4ffc00000, data 0x29ee69d/0x2aa6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413257 data_alloc: 234881024 data_used: 22876160
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2652af00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27682000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28ac90e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ac9e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413009 data_alloc: 234881024 data_used: 22876160
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 15949824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458305 data_alloc: 251658240 data_used: 27918336
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.607965469s of 18.688673019s, submitted: 14
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458305 data_alloc: 251658240 data_used: 27918336
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 10706944 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 10362880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1486335 data_alloc: 251658240 data_used: 28082176
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1486335 data_alloc: 251658240 data_used: 28082176
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a256f2000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.759673119s of 13.877911568s, submitted: 25
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27982d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.4 total, 600.0 interval
Cumulative writes: 9526 writes, 35K keys, 9526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 9526 writes, 2376 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1542 writes, 4300 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 3.82 MB, 0.01 MB/s
Interval WAL: 1542 writes, 694 syncs, 2.22 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362263 data_alloc: 234881024 data_used: 21254144
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a260d94a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a276823c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9331000/0x0/0x4ffc00000, data 0x228369d/0x233b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27e3b4a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a279830e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27a0be00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a27e3a000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27bbc780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.692840576s of 34.780414581s, submitted: 28
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27bbc1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a264254a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a269d43c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a274def00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a274ded20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d6000/0x0/0x4ffc00000, data 0xedd6ec/0xf96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118996 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27427c20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc000 session 0x559a2569de00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d6000/0x0/0x4ffc00000, data 0xedd6ec/0xf96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24543232 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a274261e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a264230e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 24469504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 24444928 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120810 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120810 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.602138519s of 17.687946320s, submitted: 33
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24576000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129046 data_alloc: 218103808 data_used: 7233536
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 23732224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa534000/0x0/0x4ffc00000, data 0x106f6fc/0x1129000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139544 data_alloc: 218103808 data_used: 7049216
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa534000/0x0/0x4ffc00000, data 0x106f6fc/0x1129000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134264 data_alloc: 218103808 data_used: 7053312
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x10906fc/0x114a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x10906fc/0x114a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.840806007s of 14.135817528s, submitted: 62
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134512 data_alloc: 218103808 data_used: 7053312
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x10966fc/0x1150000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x10966fc/0x1150000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135448 data_alloc: 218103808 data_used: 7061504
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.418850899s of 11.506878853s, submitted: 4
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2572fa40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f3a40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157513 data_alloc: 218103808 data_used: 7061504
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fcc00 session 0x559a274de960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27bbcd20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a260d9c20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27944b40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a2814e5a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157513 data_alloc: 218103808 data_used: 7061504
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fd000 session 0x559a26424f00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27e3c960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167829 data_alloc: 218103808 data_used: 8433664
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.438807487s of 11.514143944s, submitted: 24
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167725 data_alloc: 218103808 data_used: 8433664
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 20389888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x17226fc/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 20389888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e70000/0x0/0x4ffc00000, data 0x17426fc/0x17fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214197 data_alloc: 234881024 data_used: 9682944
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e70000/0x0/0x4ffc00000, data 0x17426fc/0x17fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.319500923s of 11.676329613s, submitted: 95
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e6d000/0x0/0x4ffc00000, data 0x17456fc/0x17ff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e6d000/0x0/0x4ffc00000, data 0x17456fc/0x17ff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214593 data_alloc: 234881024 data_used: 9682944
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e68000/0x0/0x4ffc00000, data 0x174a6fc/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214529 data_alloc: 234881024 data_used: 9682944
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.610642433s of 11.667161942s, submitted: 5
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a279443c0
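ms_handle_reset records the messenger notifying osd.1 that a peer reset a connection; the two hex values are heap addresses of the Connection and Session objects, useful only for correlating related lines within one process lifetime. These pile up later in this burst, so a quick per-family tally helps triage. A sketch, assuming the journal slice was saved to osd.log and with family patterns of our own choosing:

```python
import re
from collections import Counter

# Tally how often each ceph-osd message family appears in a captured
# journal slice (e.g. journalctl -u <osd unit> > osd.log). The file name
# and the family patterns are ours; extend the table as needed.
FAMILIES = {
    "heartbeat":     re.compile(r"\bheartbeat osd_stat\("),
    "tune_memory":   re.compile(r"\bprioritycache tune_memory\b"),
    "resize_shards": re.compile(r"\b_resize_shards\b"),
    "kv_sync":       re.compile(r"\b_kv_sync_thread utilization\b"),
    "ms_reset":      re.compile(r"\bms_handle_reset\b"),
}

counts = Counter()
with open("osd.log") as fh:
    for log_line in fh:
        for family, pattern in FAMILIES.items():
            if pattern.search(log_line):
                counts[family] += 1
                break

for family, n in counts.most_common():
    print(f"{family:14s} {n}")
```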
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214553 data_alloc: 234881024 data_used: 9682944
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e65000/0x0/0x4ffc00000, data 0x174d6fc/0x1807000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e65000/0x0/0x4ffc00000, data 0x174d6fc/0x1807000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217213 data_alloc: 234881024 data_used: 9666560
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x17526fc/0x180c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27445c00 session 0x559a266a6780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27444800 session 0x559a2654de00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x17526fc/0x180c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217213 data_alloc: 234881024 data_used: 9666560
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273ddc00 session 0x559a279c4960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.261064529s of 11.288570404s, submitted: 18
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a258be000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e5d000/0x0/0x4ffc00000, data 0x17556fc/0x180f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215373 data_alloc: 234881024 data_used: 9666560
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 19947520 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 19824640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215309 data_alloc: 234881024 data_used: 9666560
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.961704254s of 10.976827621s, submitted: 243
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215333 data_alloc: 234881024 data_used: 9666560
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217369 data_alloc: 234881024 data_used: 9654272
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a41000/0x0/0x4ffc00000, data 0x17616fc/0x181b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.807581902s of 12.267666817s, submitted: 20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215849 data_alloc: 234881024 data_used: 9654272
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a3c000/0x0/0x4ffc00000, data 0x17666fc/0x1820000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a280e34a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a274270e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145873 data_alloc: 218103808 data_used: 7045120
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x10d36fc/0x118d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x10d36fc/0x118d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a25580d20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27e3ab40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.195116997s of 10.299222946s, submitted: 33
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27e3da40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.175165176s of 22.234811783s, submitted: 17
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a28ac92c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138546 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a256f10e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x10c367a/0x117a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 20742144 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 20742144 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2569da40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2803be00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161579 data_alloc: 234881024 data_used: 10174464
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 21610496 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27bbd2c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116093 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a255801e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a25581e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27befc00 session 0x559a2742c960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27a0a1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.908199310s of 18.412792206s, submitted: 32
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2742d680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a283901e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143635 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27a0ab40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b4c00 session 0x559a280e3680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a274285a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa19c000/0x0/0x4ffc00000, data 0x10086dc/0x10c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2572e1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 22339584 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144444 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 22298624 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa19c000/0x0/0x4ffc00000, data 0x10086dc/0x10c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109404160 unmapped: 21741568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27951680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27de9a40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21774336 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21774336 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b5400 session 0x559a27de7e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.809024811s of 37.447509766s, submitted: 86
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a264234a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f2f00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2845cb40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2803a3c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b5000 session 0x559a26539680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194133 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x16f867a/0x17af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2654c960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196070 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262190 data_alloc: 234881024 data_used: 16707584
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262190 data_alloc: 234881024 data_used: 16707584
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.052062988s of 19.211286545s, submitted: 24
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19505152 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 16826368 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 16408576 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 16408576 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371334 data_alloc: 234881024 data_used: 18436096
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 16203776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 16203776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371350 data_alloc: 234881024 data_used: 18436096
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 16138240 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372414 data_alloc: 234881024 data_used: 18464768
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 16130048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 16130048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.596179008s of 15.956642151s, submitted: 86
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2803a1e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 25747456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a28ad32c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.090698242s of 22.205930710s, submitted: 37
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a28ad3860
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a284703c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 26337280 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2839cf00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2552f4a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27decd20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181157 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e97000/0x0/0x4ffc00000, data 0x130e67a/0x13c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a28ab9860
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181157 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27e38960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a258bfe00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2552ef00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 26443776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 25337856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222158 data_alloc: 234881024 data_used: 12767232
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222158 data_alloc: 234881024 data_used: 12767232
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27431680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a260d9c20
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a260d9a40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a260d94a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.390014648s of 17.499362946s, submitted: 25
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a274270e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a27982000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a279825a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a279823c0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a279830e0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e94000/0x0/0x4ffc00000, data 0x130e6d6/0x13c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 24739840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x179170f/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 18194432 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 21217280 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351827 data_alloc: 234881024 data_used: 13385728
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 18694144 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 18694144 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353347 data_alloc: 234881024 data_used: 13520896
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f2000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 19898368 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19472384 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380455 data_alloc: 234881024 data_used: 18247680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380455 data_alloc: 234881024 data_used: 18247680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122765312 unmapped: 16777216 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 16744448 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 16744448 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.176345825s of 21.564163208s, submitted: 117
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123240448 unmapped: 16302080 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122396672 unmapped: 17145856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409185 data_alloc: 234881024 data_used: 18366464
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122396672 unmapped: 17145856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411779 data_alloc: 234881024 data_used: 18366464
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411779 data_alloc: 234881024 data_used: 18366464
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 16998400 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.063998222s of 12.514191628s, submitted: 29
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d64000/0x0/0x4ffc00000, data 0x243d70f/0x24f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411635 data_alloc: 234881024 data_used: 18378752
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2569de00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a2845c960
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d64000/0x0/0x4ffc00000, data 0x243d70f/0x24f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a27e3d680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f92b7000/0x0/0x4ffc00000, data 0x1c9d69d/0x1d55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2654cf00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316547 data_alloc: 234881024 data_used: 13529088
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.850693703s of 10.007095337s, submitted: 51
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a2742b680
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b69d/0xe33000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.863340378s of 25.891838074s, submitted: 8
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a28175e00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a27950f00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2552f4a0
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27de6780
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2742c000
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177188 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a260d7860
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179125 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189765 data_alloc: 218103808 data_used: 8372224
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189765 data_alloc: 218103808 data_used: 8372224
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.808490753s of 19.874235153s, submitted: 26
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120709120 unmapped: 18833408 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 19079168 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 19079168 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9870000/0x0/0x4ffc00000, data 0x192c69d/0x19e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 19070976 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269811 data_alloc: 218103808 data_used: 8970240
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261691 data_alloc: 218103808 data_used: 8970240
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.244734764s of 12.532417297s, submitted: 93
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a27de8b40
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27decf00
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 22413312 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 22413312 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}'
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 22331392 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 22192128 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:18:46 np0005532761 ceph-osd[83114]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26107 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16746 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26816 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:18:46 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294278415' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:18:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:46.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:46 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26831 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:47.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:18:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3526342823' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26134 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 16:18:47 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948095813' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26149 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:47 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26873 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26161 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16836 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26894 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181781317' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26173 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16863 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:48.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 16:18:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166756432' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 16:18:48 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26179 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16878 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538972713' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26197 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.16893 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080332708' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 16:18:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200520148' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483345435' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 16:18:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349483395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969985076' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 16:18:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106951192' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773834112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393723153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 16:18:51 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27077 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261763036' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 16:18:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545793442' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 16:18:51 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 16:18:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:18:51.876 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:18:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:18:51.876 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:18:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:18:51.876 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:18:51 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 16:18:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26296 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17043 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17067 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 16:18:52 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369750004' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26317 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:52.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
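The radosgw "starting new request / req done / beast:" triplets recur every two seconds from 192.168.122.102 and 192.168.122.100: anonymous "HEAD / HTTP/1.0" returning 200 with near-zero latency, which matches a load balancer health-checking the RGW beast frontend rather than real S3 traffic. A probe of the same shape, as a sketch (host and port are assumptions; the listening port is not visible in these lines):

    import http.client

    # Hypothetical health probe mirroring the anonymous HEAD requests above.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 when the frontend is up
    conn.close()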
Nov 23 16:18:52 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27152 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17091 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26329 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26338 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27179 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 16:18:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840698566' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27197 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26353 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17136 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:53 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 16:18:53 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279828983' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17163 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27221 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859332303' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17196 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26383 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27248 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
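Each audit-channel entry above records one dispatched admin command; the steady stream of "orch ls", "orch ps", "orch host ls", "telemetry ... ls", and "device ls" every second or so is automated polling (cephadm/dashboard style), not an operator typing. A small tally over such lines, assuming this exact audit format:

    import json
    import re
    from collections import Counter

    # The command JSON sits between cmd=[ and ]: dispatch.
    pat = re.compile(r"cmd=\[(\{.*\})\]: dispatch")

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = pat.search(line)
            if m:
                counts[json.loads(m.group(1)).get("prefix", "?")] += 1
        return counts

    print(tally(['... cmd=[{"prefix": "orch ls"}]: dispatch']))
    # Counter({'orch ls': 1})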
Nov 23 16:18:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:54.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 16:18:54 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502980523' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17217 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26407 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17250 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27302 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:55 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26425 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696057665' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 16:18:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:18:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17283 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 23 16:18:56 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173198822' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 16:18:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26470 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:56.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:56.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:18:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10812519' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:57.211Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:57.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
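Both alertmanager entries report the same failure: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) never answer, so notifications time out, are retried, and are finally cancelled. Note the URL scheme is http:// on 8443; if the dashboard there actually serves TLS, that mismatch alone can stall the POST until the deadline. A hedged extractor for the failing endpoints (the doubled backslashes match journald's escaped quotes):

    import re

    pat = re.compile(r'Post \\"(http[^"\\]+)\\": ([^";]+)')

    line = ('Post \\"http://compute-1.ctlplane.example.com:8443/api/'
            'prometheus_receiver\\": context deadline exceeded')
    for url, reason in pat.findall(line):
        print(url, "->", reason)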
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322776160' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 16:18:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:18:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:18:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:18:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27377 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 23 16:18:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328676861' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 16:18:58 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17337 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:18:58 np0005532761 podman[280265]: 2025-11-23 21:18:58.5424947 +0000 UTC m=+0.057395064 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 23 16:18:58 np0005532761 podman[280264]: 2025-11-23 21:18:58.573766879 +0000 UTC m=+0.091365674 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
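The two podman health_status entries come from the healthchecks embedded in each container's config_data ('healthcheck': {'test': '/openstack/healthcheck'}): podman runs the test on a timer and logs the outcome, and health_status=healthy with health_failing_streak=0 means consecutive passes. A field extractor fitted to this log format (the regex is an assumption, not a podman API):

    import re

    pat = re.compile(
        r"name=(\w+), health_status=(\w+), health_failing_streak=(\d+)")

    line = "... name=ovn_controller, health_status=healthy, health_failing_streak=0, ..."
    m = pat.search(line)
    if m:
        print(m.groups())  # ('ovn_controller', 'healthy', '0')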
Nov 23 16:18:58 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 23 16:18:58 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288918010' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 16:18:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:58 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 7216 writes, 32K keys, 7216 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 7216 writes, 7216 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1560 writes, 6965 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 11.81 MB, 0.02 MB/s
Interval WAL: 1560 writes, 1560 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     85.8      0.61              0.13        18    0.034       0      0       0.0       0.0
  L6      1/0   13.93 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     73.2     62.6      3.62              0.60        17    0.213     94K   9488       0.0       0.0
 Sum      1/0   13.93 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     62.7     66.0      4.22              0.74        35    0.121     94K   9488       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     82.8     84.6      0.81              0.20         8    0.101     26K   2583       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     73.2     62.6      3.62              0.60        17    0.213     94K   9488       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     86.4      0.60              0.13        17    0.035       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.1 total, 600.0 interval
Flush(GB): cumulative 0.051, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.27 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 4.2 seconds
Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf3f93d350#2 capacity: 304.00 MB usage: 23.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000192 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1474,23.03 MB,7.57684%) FilterBlock(36,275.05 KB,0.0883554%) IndexBlock(36,482.52 KB,0.155002%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
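The mon's periodic RocksDB dump above is mostly noise at this scale, but the write-amplification figure is a quick health check: total bytes written (flush plus compaction) over bytes ingested. Back-of-envelope from the cumulative numbers above (they are rounded in the dump, so this only approximates the reported Sum W-Amp of 5.3):

    # Rounded figures taken from the dump above.
    ingest_gb = 0.06            # Cumulative writes: ingest
    flush_gb = 0.051            # Flush(GB): cumulative
    compaction_write_gb = 0.27  # Cumulative compaction: write

    w_amp = (flush_gb + compaction_write_gb) / ingest_gb
    print(w_amp)  # ~5.35, in line with the reported W-Amp of 5.3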
Nov 23 16:18:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:18:58.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:18:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:18:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:58.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:59 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26500 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 23 16:18:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713685342' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 16:18:59 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27410 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:59 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17364 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:18:59 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 23 16:18:59 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656785731' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27437 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17388 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26518 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27446 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:00 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27449 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 23 16:19:00 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1880805734' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 23 16:19:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 23 16:19:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827759102' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 23 16:19:01 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26539 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 23 16:19:01 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128640648' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 16:19:01 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26545 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:01 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27470 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:01 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17430 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27479 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
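The pg_autoscaler sweep above is self-consistent: every logged "pg target" equals usage_fraction * bias * 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs; the target is then quantized and floored at the current pg_num, which is why every pool stays put. A reproduction of two of the logged values under that assumption:

    # Assumes pg_target = usage * bias * (mon_target_pg_per_osd * num_osds);
    # 100 * 3 = 300 is inferred, not printed in the log.
    PG_BUDGET = 100 * 3

    pools = {
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)
    # matches the logged pg targets: images ~0.1997575,
    # cephfs.cephfs.meta ~0.0006104708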
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:02 np0005532761 ovs-appctl[281446]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 16:19:02 np0005532761 ovs-appctl[281451]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 16:19:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 23 16:19:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714883536' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 16:19:02 np0005532761 ovs-appctl[281457]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
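The three ovs-appctl warnings are harmless when IPsec is not deployed: a periodic caller is targeting the ovs-monitor-ipsec daemon, and opening its pidfile fails because that daemon was never started on this node. A guard of the kind such a caller could use, as a sketch:

    from pathlib import Path

    # Pidfile path taken from the warning above.
    pidfile = Path("/var/run/openvswitch/ovs-monitor-ipsec.pid")
    if pidfile.exists():
        print("ovs-monitor-ipsec pid:", pidfile.read_text().strip())
    else:
        print("ovs-monitor-ipsec not running; skipping IPsec checks")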
Nov 23 16:19:02 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26566 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 23 16:19:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423514191' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26578 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:19:03
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
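The balancer pass is a no-op: in upmap mode with max misplaced 0.05, "prepared 0/10 upmap changes" means the optimizer looked for up to ten pg-upmap-items adjustments across the listed pools and found the PG distribution already even. Checking the same state interactively, as a sketch (ceph balancer status is a standard command; running it from this host is the assumption):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # e.g. True upmap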
Nov 23 16:19:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:19:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27509 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17475 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27518 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:19:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17484 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:19:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752389414' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:19:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26596 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 23 16:19:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577242787' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 16:19:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26605 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:19:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 23 16:19:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123989369' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:05 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27554 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:05 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17523 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:19:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2252325486' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532761 nova_compute[257263]: 2025-11-23 21:19:06.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:06 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26635 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 23 16:19:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979741711' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:06.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
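This four-line ganesha cycle (enter a 90 s grace period, reload recovery info from the backend, find zero clients to wait for, then query rados_cluster_grace_enforcing, which returns -45) repeats roughly every five seconds throughout this section. With clid count(0) the grace would normally lift, so the constant re-entry suggests the shared rados_cluster grace epoch is being held open cluster-wide; the meaning of ret=-45 is not documented in the log and is left uninterpreted here. A small sketch to surface such a loop from a captured log, with the line format assumed from the samples above and a hypothetical log file path:

    import re
    from collections import Counter

    # Count grace re-entries per ganesha node to spot a grace period that
    # never lifts. 'ganesha.log' is a hypothetical capture of these lines.
    GRACE_RE = re.compile(r': (\S+) : ganesha\.nfsd-\d+\[main\] '
                          r'nfs_start_grace .*IN GRACE, duration (\d+)')

    reentries = Counter()
    with open('ganesha.log') as f:
        for line in f:
            m = GRACE_RE.search(line)
            if m:
                reentries[m.group(1)] += 1
    for node, n in reentries.items():
        if n > 3:
            print(f'{node}: entered grace {n} times; grace may not be lifting')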
Nov 23 16:19:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 23 16:19:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481988654' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:07.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:07 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17562 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:07 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27596 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:19:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:19:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 23 16:19:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888662315' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 16:19:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 23 16:19:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827396603' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17610 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:08.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:19:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:08.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:08.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
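Both webhook receivers fail the same way: the TCP connection to port 8443 on compute-1 and compute-2 never completes (dial ... i/o timeout), so the dashboard receiver is most likely not listening there or is firewalled, rather than failing at the HTTP layer. A quick connect-level probe (hosts and port taken from the log lines; this tests TCP reachability only, not the /api/prometheus_receiver handler):

    import socket

    # Connect-level probe of the webhook endpoints Alertmanager times out on.
    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, 'port 8443 reachable')
        except OSError as exc:
            print(host, 'port 8443 unreachable:', exc)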
Nov 23 16:19:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:08.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:08.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26677 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27656 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17634 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27674 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17646 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532761 nova_compute[257263]: 2025-11-23 21:19:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27686 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 23 16:19:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688899299' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26695 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 23 16:19:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862591127' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:10.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:10.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17670 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26707 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27710 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17682 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
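The autoscaler's arithmetic is reproducible from the logged values: pg target = capacity ratio x bias x PG budget, where the budget that fits every line here is 300, consistent with 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference from the numbers, not stated in the log). The target is then rounded ("quantized") to a power of two, and pg_num stays put because the autoscaler only resizes when target and current disagree by a large factor. Verifying three of the pools above:

    # Reproduce the pg_autoscaler arithmetic from the logged values. The PG
    # budget of 300 (= 3 OSDs * default mon_target_pg_per_osd of 100) is an
    # inference from the data.
    pg_budget = 3 * 100

    for pool, ratio, bias, logged in [
        ('.mgr',               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ('images',             0.000665858301588852,  1.0, 0.19975749047665559),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        target = ratio * bias * pg_budget
        print(pool, target, abs(target - logged) < 1e-12)   # all True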
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26713 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27719 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:11 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:19:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:19:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164709178' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 23 16:19:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307253547' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17718 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532761 podman[283348]: 2025-11-23 21:19:12.674976075 +0000 UTC m=+0.081745147 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
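This podman event records a timer-driven healthcheck on the multipathd container: per the embedded config, podman bind-mounts /var/lib/openstack/healthchecks/multipathd into the container and runs /openstack/healthcheck as the test, and health_status=healthy with health_failing_streak=0 means it keeps passing. The same check can be triggered by hand; a sketch (container name taken from the event):

    import subprocess

    # Fire the multipathd container's configured healthcheck once by hand;
    # exit status 0 means healthy.
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')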
Nov 23 16:19:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17721 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:12.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:12.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27752 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.17733 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26737 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:19:13 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27770 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 16:19:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206554228' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:13 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
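Here virtqemud fails to reach the read-only socket of the modular storage daemon: /var/run/libvirt/virtstoraged-sock-ro does not exist, which usually means virtstoraged (or its socket-activation units) is not running on this host; on a compute node with Ceph-backed storage this may be benign, since the storage-pool APIs can go unused. A quick inventory of which modular-libvirt driver sockets are actually present (the path convention is taken from the error itself; the daemon list is an assumption):

    import os

    # List the modular-libvirt driver sockets on this host, using the
    # /var/run/libvirt/<driver>-sock[-ro] convention from the error above.
    for daemon in ('virtqemud', 'virtstoraged', 'virtnetworkd'):
        for suffix in ('-sock', '-sock-ro'):
            path = f'/var/run/libvirt/{daemon}{suffix}'
            print(path, 'present' if os.path.exists(path) else 'MISSING')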
Nov 23 16:19:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 23 16:19:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650076849' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:14 np0005532761 nova_compute[257263]: 2025-11-23 21:19:14.031 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:14 np0005532761 nova_compute[257263]: 2025-11-23 21:19:14.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:14 np0005532761 systemd[1]: Starting Time & Date Service...
Nov 23 16:19:14 np0005532761 systemd[1]: Started Time & Date Service.
Nov 23 16:19:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:14 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26767 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:14 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.26779 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:19:15 np0005532761 nova_compute[257263]: 2025-11-23 21:19:15.036 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:15 np0005532761 nova_compute[257263]: 2025-11-23 21:19:15.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
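_reclaim_queued_deletes is nova's deferred-delete sweeper: with a positive reclaim_instance_interval, a deleted instance is only soft-deleted and gets purged after that many seconds; at the default 0 the task short-circuits exactly as logged. A sketch of the guard as this log line describes it (not nova's actual source):

    # Sketch of the guard documented by the log line above: with
    # reclaim_instance_interval <= 0, soft-deleted instances are never
    # reaped by this periodic task.
    def reclaim_queued_deletes(conf, soft_deleted):
        if conf['reclaim_instance_interval'] <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        for inst in soft_deleted:
            if inst['queued_seconds'] >= conf['reclaim_instance_interval']:
                print('reclaiming', inst['uuid'])

    reclaim_queued_deletes({'reclaim_instance_interval': 0}, [])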
Nov 23 16:19:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:16 np0005532761 nova_compute[257263]: 2025-11-23 21:19:16.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:16.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:16.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:17 np0005532761 nova_compute[257263]: 2025-11-23 21:19:17.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:17 np0005532761 nova_compute[257263]: 2025-11-23 21:19:17.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:19:17 np0005532761 nova_compute[257263]: 2025-11-23 21:19:17.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:19:17 np0005532761 nova_compute[257263]: 2025-11-23 21:19:17.046 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:17.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:19:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:19:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:19:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:19:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:18.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:18.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:19:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:18.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:20.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:20.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.063 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.063 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.063 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.064 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.064 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:19:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:19:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166569880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.511 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
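Nova's resource tracker learns Ceph capacity by shelling out to the ceph CLI as client.openstack; the mon audit entry at 21:19:21 above is the server-side view of this same call. The probe can be reproduced standalone (it needs the same /etc/ceph/ceph.conf and client.openstack keyring as the service; the stats keys below are what recent ceph releases emit):

    import json
    import subprocess

    # Reproduce nova's capacity probe from the log line above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'],
          'avail bytes:', stats['total_avail_bytes'])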
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.682 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.683 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4646MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.683 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.684 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.738 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.738 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:19:21 np0005532761 nova_compute[257263]: 2025-11-23 21:19:21.764 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:19:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:19:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744182617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:19:22 np0005532761 nova_compute[257263]: 2025-11-23 21:19:22.188 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:19:22 np0005532761 nova_compute[257263]: 2025-11-23 21:19:22.193 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:19:22 np0005532761 nova_compute[257263]: 2025-11-23 21:19:22.205 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:19:22 np0005532761 nova_compute[257263]: 2025-11-23 21:19:22.206 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:19:22 np0005532761 nova_compute[257263]: 2025-11-23 21:19:22.206 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
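The inventory reported at 21:19:22 fixes the node's schedulable capacity: placement admits new allocations while used + requested stays within (total - reserved) x allocation_ratio per resource class. Worked out for this node (values from the inventory line above; the capacity rule is standard placement behavior):

    # Schedulable capacity implied by the logged inventory.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1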
Nov 23 16:19:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:22.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:22.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:24 np0005532761 nova_compute[257263]: 2025-11-23 21:19:24.207 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:19:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:24.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:26.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
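[annotation] The ganesha rados_cluster_grace_enforcing lines report ret=-45. Assuming these are raw negated Linux errno values (how ganesha usually logs them, though not stated here), the code can be decoded directly:

    # Sketch: decode ganesha's negative return codes (e.g. ret=-45).
    # Mapping is platform-specific; on Linux errno 45 is EL2NSYNC.
    import errno
    import os

    ret = -45
    print(errno.errorcode.get(-ret, 'unknown'), os.strerror(-ret))

Note also the repeating pattern: the server re-enters a 90-second GRACE period roughly every 5 seconds with a client id count of 0, i.e. no clients ever reclaim state.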
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:27.215Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:27.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:27.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
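[annotation] Alertmanager's dispatcher keeps failing to POST to the dashboard receivers on compute-1/compute-2 (i/o timeout on :8443). A sketch reproducing that probe from this host, with the URL taken from the log and an empty JSON body as an assumption (only connect behaviour matters here):

    # Sketch: probe the prometheus_receiver endpoint that alertmanager
    # cannot reach, to distinguish timeout vs. refused vs. HTTP error.
    import json
    import urllib.request

    url = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    req = urllib.request.Request(
        url, data=json.dumps({}).encode(),
        headers={'Content-Type': 'application/json'},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print('HTTP', resp.status)
    except OSError as exc:  # covers URLError and socket timeouts
        print('receiver unreachable:', exc)

A connect timeout (as logged) usually points at the service not listening on 8443 or at a firewall, rather than at alertmanager itself.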
Nov 23 16:19:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:19:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:19:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:28.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47785d0 =====
Nov 23 16:19:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47785d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47785d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:29 np0005532761 podman[283934]: 2025-11-23 21:19:29.569724067 +0000 UTC m=+0.057374642 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:19:29 np0005532761 podman[283933]: 2025-11-23 21:19:29.596873926 +0000 UTC m=+0.089801563 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
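[annotation] The podman health_status events above report ovn_metadata_agent and ovn_controller as healthy with a failing streak of 0. A sketch confirming the same via podman inspect; the JSON key layout ("State" -> "Health" vs. the older "Healthcheck") varies across podman versions, so both spellings are tried:

    # Sketch: read a container's health status from podman inspect output.
    import json
    import subprocess

    name = 'ovn_metadata_agent'
    out = subprocess.run(
        ['podman', 'inspect', name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]['State']
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status'), health.get('FailingStreak'))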
Nov 23 16:19:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:19:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:19:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:19:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:19:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:19:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:34.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-0[80079]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
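[annotation] The ceph-crash agent gets EACCES scraping /var/lib/ceph/crash. The containerized agent runs as the unprivileged ceph user (conventionally uid/gid 167 in these images; the "167 167" probe output further below is consistent with that, though the exact uid here is an assumption). A quick ownership/permission check:

    # Sketch: check why an unprivileged scraper gets Errno 13 on the crash dir.
    import os
    import stat

    path = '/var/lib/ceph/crash'
    st = os.stat(path)
    print(f'owner={st.st_uid}:{st.st_gid} mode={stat.filemode(st.st_mode)}')
    print('readable by current uid:', os.access(path, os.R_OK | os.X_OK))

If the directory is root-owned without group/world execute, chowning it to the ceph uid/gid the container maps in is the usual fix.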
Nov 23 16:19:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:37.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:37.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:37.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
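[annotation] The mon_command dispatches above are the cephadm mgr module's periodic refresh (generate-minimal-conf, auth get, osd tree, config-key set). The same commands can be issued from Python through librados; a sketch assuming python3-rados is installed and an admin keyring is readable:

    # Sketch: issue one of the mon commands seen above via librados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()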
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.856483204 +0000 UTC m=+0.044811655 container create 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:19:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:38.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:38 np0005532761 systemd[1]: Started libpod-conmon-383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664.scope.
Nov 23 16:19:38 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.836098397 +0000 UTC m=+0.024426868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.941350094 +0000 UTC m=+0.129678565 container init 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:19:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.950387936 +0000 UTC m=+0.138716397 container start 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.954063965 +0000 UTC m=+0.142392416 container attach 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 16:19:38 np0005532761 amazing_newton[284180]: 167 167
Nov 23 16:19:38 np0005532761 systemd[1]: libpod-383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664.scope: Deactivated successfully.
Nov 23 16:19:38 np0005532761 podman[284164]: 2025-11-23 21:19:38.959394188 +0000 UTC m=+0.147722639 container died 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:19:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-f245fe0ede8681a7d26ca2d5edf0c9474ec1b7afd129f63855dea0f376eb9610-merged.mount: Deactivated successfully.
Nov 23 16:19:39 np0005532761 podman[284164]: 2025-11-23 21:19:39.276418204 +0000 UTC m=+0.464746655 container remove 383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:19:39 np0005532761 systemd[1]: libpod-conmon-383b01da4523ec9c6aba21b7c9be22989f304305a733b868fd2d3b9836f4d664.scope: Deactivated successfully.
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.458243648 +0000 UTC m=+0.065293105 container create aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 16:19:39 np0005532761 systemd[1]: Started libpod-conmon-aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df.scope.
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.419692653 +0000 UTC m=+0.026742160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:39 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:39 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
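[annotation] The xfs "supports timestamps until 2038 (0x7fffffff)" warnings above are the classic 32-bit time_t limit surfacing on bind-mounted container paths; they are informational, not errors. The cutoff is plain arithmetic:

    # Sketch: the 0x7fffffff limit the kernel warns about is the
    # 32-bit signed time_t rollover.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00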
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.560105765 +0000 UTC m=+0.167155252 container init aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.568597413 +0000 UTC m=+0.175646870 container start aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.572352594 +0000 UTC m=+0.179402071 container attach aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 16:19:39 np0005532761 priceless_pasteur[284221]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:19:39 np0005532761 priceless_pasteur[284221]: --> All data devices are unavailable
Nov 23 16:19:39 np0005532761 systemd[1]: libpod-aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df.scope: Deactivated successfully.
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.933673269 +0000 UTC m=+0.540722726 container died aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:19:39 np0005532761 systemd[1]: var-lib-containers-storage-overlay-28c7d1244792dfe72374603a6364b181aeb71b56ea9117ead06fd80884c0cf9a-merged.mount: Deactivated successfully.
Nov 23 16:19:39 np0005532761 podman[284204]: 2025-11-23 21:19:39.981575126 +0000 UTC m=+0.588624583 container remove aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:19:39 np0005532761 systemd[1]: libpod-conmon-aff545071a7ace92b9dcc03c1ddc8fa3d48c360b416f6dfcf9003cbe233634df.scope: Deactivated successfully.
Nov 23 16:19:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.596601336 +0000 UTC m=+0.037287012 container create 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:19:40 np0005532761 systemd[1]: Started libpod-conmon-42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8.scope.
Nov 23 16:19:40 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.670850761 +0000 UTC m=+0.111536517 container init 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.677487789 +0000 UTC m=+0.118173455 container start 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.581716947 +0000 UTC m=+0.022402643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.681075745 +0000 UTC m=+0.121761511 container attach 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:19:40 np0005532761 frosty_poitras[284360]: 167 167
Nov 23 16:19:40 np0005532761 systemd[1]: libpod-42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8.scope: Deactivated successfully.
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.68347785 +0000 UTC m=+0.124163526 container died 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:19:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:40 np0005532761 systemd[1]: var-lib-containers-storage-overlay-68156efefd523e968035b275fcf0245982ad1bc145d740a44744c380438d9d51-merged.mount: Deactivated successfully.
Nov 23 16:19:40 np0005532761 podman[284342]: 2025-11-23 21:19:40.740315527 +0000 UTC m=+0.181001203 container remove 42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:19:40 np0005532761 systemd[1]: libpod-conmon-42b20bdaf67d5c9b1699f168a97fe2866966b9c9752dc69e1e198779a95aebf8.scope: Deactivated successfully.
Nov 23 16:19:40 np0005532761 podman[284385]: 2025-11-23 21:19:40.914957708 +0000 UTC m=+0.048405641 container create 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:19:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:40 np0005532761 systemd[1]: Started libpod-conmon-49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966.scope.
Nov 23 16:19:40 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ad1dc446ee4819dec54229d12901af10bdf5d93df91f173938a99d4207dada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:40 np0005532761 podman[284385]: 2025-11-23 21:19:40.893860951 +0000 UTC m=+0.027308894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:40.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ad1dc446ee4819dec54229d12901af10bdf5d93df91f173938a99d4207dada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ad1dc446ee4819dec54229d12901af10bdf5d93df91f173938a99d4207dada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:40 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ad1dc446ee4819dec54229d12901af10bdf5d93df91f173938a99d4207dada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:41 np0005532761 podman[284385]: 2025-11-23 21:19:41.00622062 +0000 UTC m=+0.139668633 container init 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 23 16:19:41 np0005532761 podman[284385]: 2025-11-23 21:19:41.01516756 +0000 UTC m=+0.148615523 container start 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:19:41 np0005532761 podman[284385]: 2025-11-23 21:19:41.019342292 +0000 UTC m=+0.152790245 container attach 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]: {
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:    "1": [
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:        {
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "devices": [
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "/dev/loop3"
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            ],
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "lv_name": "ceph_lv0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "lv_size": "21470642176",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "name": "ceph_lv0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "tags": {
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.cluster_name": "ceph",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.crush_device_class": "",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.encrypted": "0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.osd_id": "1",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.type": "block",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.vdo": "0",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:                "ceph.with_tpm": "0"
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            },
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "type": "block",
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:            "vg_name": "ceph_vg0"
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:        }
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]:    ]
Nov 23 16:19:41 np0005532761 admiring_khayyam[284402]: }
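[annotation] The JSON emitted by the admiring_khayyam container above has the shape of "ceph-volume lvm list --format json" output: a map from OSD id to its logical volumes, with the ceph.* metadata carried as LV tags. A minimal walk over that structure, assuming the layout shown (the snippet below is a trimmed copy of the log output, not a fresh query):

    # Sketch: walk a ceph-volume lvm list JSON report (osd id -> LVs).
    import json

    report = json.loads("""
    {
      "1": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "devices": ["/dev/loop3"],
          "tags": {"ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c"}
        }
      ]
    }
    """)

    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")

Together with the earlier "0 physical, 1 LVM / All data devices are unavailable" output, this reads as cephadm re-scanning devices and finding the only candidate LV already claimed by osd.1.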
Nov 23 16:19:41 np0005532761 systemd[1]: libpod-49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966.scope: Deactivated successfully.
Nov 23 16:19:41 np0005532761 podman[284385]: 2025-11-23 21:19:41.305060157 +0000 UTC m=+0.438508090 container died 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:19:41 np0005532761 systemd[1]: var-lib-containers-storage-overlay-52ad1dc446ee4819dec54229d12901af10bdf5d93df91f173938a99d4207dada-merged.mount: Deactivated successfully.
Nov 23 16:19:41 np0005532761 podman[284385]: 2025-11-23 21:19:41.510051244 +0000 UTC m=+0.643499167 container remove 49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khayyam, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:19:41 np0005532761 systemd[1]: libpod-conmon-49c19c47d4256a999ae0701bf5b3088fe0035701894e93fcab86a05207aef966.scope: Deactivated successfully.
Nov 23 16:19:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
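ganesha.nfsd on this node keeps announcing a fresh 90 s grace period, reloading client recovery info from the RADOS backend, and then checking whether grace can be lifted; with reclaim complete(0) and clid count(0) there is nothing to reclaim locally, but with the clustered (rados_cluster) recovery backend grace can only end once every cluster member agrees, which plausibly explains the repetition while the peer NFS daemon on compute-1 is down (see the CEPHADM_FAILED_DAEMON warning later in this log). A sketch of the lift decision these lines imply, as illustration rather than Ganesha's actual implementation:

    # Illustration of the check implied by nfs_try_lift_grace: grace can
    # lift when no client holds reclaimable state, or once every known
    # client has finished reclaim. Ganesha's real logic additionally
    # requires cluster-wide agreement in the rados_cluster backend.
    def can_lift_grace(clid_count: int, reclaim_complete: int) -> bool:
        return clid_count == 0 or reclaim_complete >= clid_count

    print(can_lift_grace(clid_count=0, reclaim_complete=0))  # True, as logged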
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.051515888 +0000 UTC m=+0.042576525 container create 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:19:42 np0005532761 systemd[1]: Started libpod-conmon-4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715.scope.
Nov 23 16:19:42 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.030584545 +0000 UTC m=+0.021645222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.140333284 +0000 UTC m=+0.131393961 container init 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.145971525 +0000 UTC m=+0.137032172 container start 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:19:42 np0005532761 nervous_vaughan[284533]: 167 167
Nov 23 16:19:42 np0005532761 systemd[1]: libpod-4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715.scope: Deactivated successfully.
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.154935296 +0000 UTC m=+0.145995993 container attach 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.155368557 +0000 UTC m=+0.146429214 container died 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:19:42 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a6ad428550c190c185557834fa287bdb1e541843821fb32acdd941065c62bf70-merged.mount: Deactivated successfully.
Nov 23 16:19:42 np0005532761 podman[284516]: 2025-11-23 21:19:42.220268871 +0000 UTC m=+0.211329528 container remove 4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 23 16:19:42 np0005532761 systemd[1]: libpod-conmon-4152c4a094def199b88b49978b279a8d39d68c491812bf52abd942ab8d9b1715.scope: Deactivated successfully.
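Both admiring_khayyam and nervous_vaughan follow the same one-shot pattern: podman logs create, init, start, attach, died and remove within about a second, with systemd starting and tearing down the matching libpod and libpod-conmon scopes. These are cephadm's short-lived check containers. A sketch for watching that lifecycle live, assuming podman is available; `podman events` and its flags are real, but the exact JSON field names can vary between podman releases:

    import json
    import subprocess

    # Stream container lifecycle events, mirroring the create/init/start/
    # attach/died/remove sequence recorded above.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "type=container", "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # field names assumed stable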
Nov 23 16:19:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:19:42 np0005532761 podman[284557]: 2025-11-23 21:19:42.393915435 +0000 UTC m=+0.060403243 container create e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:19:42 np0005532761 systemd[1]: Started libpod-conmon-e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b.scope.
Nov 23 16:19:42 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:19:42 np0005532761 podman[284557]: 2025-11-23 21:19:42.37249523 +0000 UTC m=+0.038983068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:19:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30518a58557676aaf1d3489e362f34bad932e0ed279310d3ba6f3d749fc3824c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30518a58557676aaf1d3489e362f34bad932e0ed279310d3ba6f3d749fc3824c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30518a58557676aaf1d3489e362f34bad932e0ed279310d3ba6f3d749fc3824c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:19:42 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30518a58557676aaf1d3489e362f34bad932e0ed279310d3ba6f3d749fc3824c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
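The four xfs notices mean these overlay mounts were created without the xfs bigtime feature, so their inode timestamps saturate at epoch second 0x7fffffff. Converting that constant confirms the kernel's year-2038 horizon:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00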
Nov 23 16:19:42 np0005532761 podman[284557]: 2025-11-23 21:19:42.493561322 +0000 UTC m=+0.160049130 container init e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:19:42 np0005532761 podman[284557]: 2025-11-23 21:19:42.501007652 +0000 UTC m=+0.167495460 container start e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Nov 23 16:19:42 np0005532761 podman[284557]: 2025-11-23 21:19:42.511425612 +0000 UTC m=+0.177913410 container attach e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:19:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:42.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
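The paired beast lines every two seconds are anonymous HEAD / probes from 192.168.122.100 and .102, the signature of an external load-balancer health check against radosgw; each returns 200 with an empty body in well under a millisecond. An equivalent probe, where the gateway hostname and port are assumptions (the log shows only the client addresses):

    import http.client

    # Hypothetical endpoint: the log does not reveal where radosgw listens.
    conn = http.client.HTTPConnection("rgw.internal.example", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, matching the log
    conn.close()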
Nov 23 16:19:43 np0005532761 lvm[284656]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:19:43 np0005532761 lvm[284656]: VG ceph_vg0 finished
Nov 23 16:19:43 np0005532761 crazy_blackwell[284574]: {}
Nov 23 16:19:43 np0005532761 lvm[284671]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:19:43 np0005532761 lvm[284671]: VG ceph_vg0 finished
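The lvm messages are event-driven autoactivation: udev told pvscan that /dev/loop3 appeared, which made VG ceph_vg0 complete (all PVs online), so the VG could be activated; the duplicate pair comes from two pvscan invocations. The manual equivalent, as a sketch that shells out to the standard lvm2 command (root required):

    import subprocess

    # Replays what udev-triggered pvscan did: mark the PV online and
    # autoactivate any VG that became complete as a result.
    subprocess.run(
        ["pvscan", "--cache", "--activate", "ay", "/dev/loop3"],
        check=True,
    )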
Nov 23 16:19:43 np0005532761 podman[284649]: 2025-11-23 21:19:43.164555026 +0000 UTC m=+0.066141788 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:19:43 np0005532761 systemd[1]: libpod-e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b.scope: Deactivated successfully.
Nov 23 16:19:43 np0005532761 systemd[1]: libpod-e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b.scope: Consumed 1.128s CPU time.
Nov 23 16:19:43 np0005532761 conmon[284574]: conmon e94f87cf1a2eef35ace7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b.scope/container/memory.events
Nov 23 16:19:43 np0005532761 podman[284557]: 2025-11-23 21:19:43.190997796 +0000 UTC m=+0.857485604 container died e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 16:19:43 np0005532761 systemd[1]: var-lib-containers-storage-overlay-30518a58557676aaf1d3489e362f34bad932e0ed279310d3ba6f3d749fc3824c-merged.mount: Deactivated successfully.
Nov 23 16:19:43 np0005532761 podman[284557]: 2025-11-23 21:19:43.28645963 +0000 UTC m=+0.952947438 container remove e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:19:43 np0005532761 systemd[1]: libpod-conmon-e94f87cf1a2eef35ace70d568fb33c32d100b478fa2427811d206906fcbed66b.scope: Deactivated successfully.
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:43 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
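The mon audit trail shows the cephadm mgr module persisting its per-host cache: config-key set on mgr/cephadm/host.compute-0.devices.0 (the device inventory gathered by the helper containers above) and on mgr/cephadm/host.compute-0. One way to read such an entry back, assuming an admin keyring; the stored value is JSON in current cephadm releases, though its shape is not shown in this log:

    import json
    import subprocess

    # Fetch the device inventory cephadm just cached for compute-0.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))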
Nov 23 16:19:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:44 np0005532761 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 16:19:44 np0005532761 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 16:19:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:44.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:19:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:47.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:19:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:19:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:19:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:48.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:19:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:48.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:19:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
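Alertmanager's ceph-dashboard receiver cannot deliver to either standby dashboard: the dial to 192.168.122.101/.102:8443 times out, then the retries are cancelled at the context deadline. That points at the endpoints being unreachable (daemon down or port blocked) rather than rejecting payloads. A throwaway stub for isolating the network question, entirely hypothetical and not the dashboard's /api/prometheus_receiver implementation:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Accepts any POST with 200 so webhook delivery can be tested
    # independently of the real dashboard.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()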
Nov 23 16:19:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:48.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:50.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:19:51.877 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:19:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:19:51.878 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:19:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:19:51.878 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
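The three lockutils lines are one pass of the agent's _check_child_processes periodic task: acquire requested, acquired after a 0.001 s wait, released after being held for under a millisecond. The same pattern with the real oslo.concurrency API, reusing the lock name from the log for illustration:

    from oslo_concurrency import lockutils

    # Serializes all callers on the named in-process lock; with debug
    # logging enabled oslo emits acquire/release lines like those above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect managed child processes here

    check_child_processes()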
Nov 23 16:19:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:52.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:19:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:54.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:56.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:19:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:19:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:57.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:57] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:19:57] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:19:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:19:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:19:58.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:19:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:58.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:19:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1 is in error state
Nov 23 16:20:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:00 np0005532761 podman[284759]: 2025-11-23 21:20:00.464532432 +0000 UTC m=+0.054078714 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:20:00 np0005532761 podman[284758]: 2025-11-23 21:20:00.49048736 +0000 UTC m=+0.083832463 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532761 ceph-mon[74569]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532761 ceph-mon[74569]:    daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1 is in error state
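The health detail (logged once via log_channel and once as plain mon output) names the failing daemon behind the HEALTH_WARN: nfs.cephfs.0.0.compute-1.fuxuha on compute-1, consistent with the Alertmanager webhook timeouts and with this node's ganesha never leaving grace. Typical triage, sketched with standard ceph orch subcommands and assuming admin access:

    import subprocess

    # List NFS daemons as cephadm sees them, then retry the failed one.
    subprocess.run(["ceph", "orch", "ps", "--daemon-type", "nfs"], check=True)
    subprocess.run(
        ["ceph", "orch", "daemon", "restart", "nfs.cephfs.0.0.compute-1.fuxuha"],
        check=True,
    )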
Nov 23 16:20:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:00.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:02 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:02.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:03.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:20:03
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'images', '.nfs']
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:20:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:20:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
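Every pg_autoscaler line fits pg_target = capacity_ratio x bias x T with T = 300; in this 60 GiB cluster that plausibly corresponds to mon_target_pg_per_osd (default 100) times three OSDs, though T is inferred from the numbers rather than stated in the log. A worked check against three of the logged lines:

    # T = 300 reproduces every reported pg target in this log; its
    # decomposition into 100 PGs/OSD x 3 OSDs is an inference.
    T = 300
    cases = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for pool, ratio, bias, reported in cases:
        assert abs(ratio * bias * T - reported) < 1e-12, pool
    print("all logged pg targets reproduce with T =", T)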
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:20:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:20:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:04.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:05.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:06 np0005532761 nova_compute[257263]: 2025-11-23 21:20:06.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:06.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:07.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:07 np0005532761 systemd[1]: session-56.scope: Deactivated successfully.
Nov 23 16:20:07 np0005532761 systemd[1]: session-56.scope: Consumed 2min 53.630s CPU time, 736.3M memory peak, read 249.5M from disk, written 80.4M to disk.
Nov 23 16:20:07 np0005532761 systemd-logind[820]: Session 56 logged out. Waiting for processes to exit.
Nov 23 16:20:07 np0005532761 systemd-logind[820]: Removed session 56.
Nov 23 16:20:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:20:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:20:07 np0005532761 systemd-logind[820]: New session 57 of user zuul.
Nov 23 16:20:07 np0005532761 systemd[1]: Started Session 57 of User zuul.
Nov 23 16:20:07 np0005532761 systemd[1]: session-57.scope: Deactivated successfully.
Nov 23 16:20:07 np0005532761 systemd-logind[820]: Session 57 logged out. Waiting for processes to exit.
Nov 23 16:20:07 np0005532761 systemd-logind[820]: Removed session 57.
Nov 23 16:20:08 np0005532761 systemd-logind[820]: New session 58 of user zuul.
Nov 23 16:20:08 np0005532761 systemd[1]: Started Session 58 of User zuul.
Nov 23 16:20:08 np0005532761 systemd[1]: session-58.scope: Deactivated successfully.
Nov 23 16:20:08 np0005532761 systemd-logind[820]: Session 58 logged out. Waiting for processes to exit.
Nov 23 16:20:08 np0005532761 systemd-logind[820]: Removed session 58.
Nov 23 16:20:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:10.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:11.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:12 np0005532761 nova_compute[257263]: 2025-11-23 21:20:12.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:20:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:20:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:13.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:13 np0005532761 podman[284900]: 2025-11-23 21:20:13.53960536 +0000 UTC m=+0.061518854 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:20:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:20:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:20:15 np0005532761 nova_compute[257263]: 2025-11-23 21:20:15.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:15.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:16 np0005532761 nova_compute[257263]: 2025-11-23 21:20:16.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:17 np0005532761 nova_compute[257263]: 2025-11-23 21:20:17.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:17 np0005532761 nova_compute[257263]: 2025-11-23 21:20:17.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:17 np0005532761 nova_compute[257263]: 2025-11-23 21:20:17.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:20:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:17.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:17.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:20:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:20:18 np0005532761 nova_compute[257263]: 2025-11-23 21:20:18.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:18 np0005532761 nova_compute[257263]: 2025-11-23 21:20:18.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:20:18 np0005532761 nova_compute[257263]: 2025-11-23 21:20:18.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:20:18 np0005532761 nova_compute[257263]: 2025-11-23 21:20:18.047 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:20:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:20:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:20:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:20:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:20:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:20.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:21.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.058 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.059 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.059 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.059 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:20:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:20:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794938743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.505 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.661 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.662 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4818MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.663 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.663 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.758 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.759 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:20:21 np0005532761 nova_compute[257263]: 2025-11-23 21:20:21.775 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:20:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:20:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626062665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:20:22 np0005532761 nova_compute[257263]: 2025-11-23 21:20:22.226 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:20:22 np0005532761 nova_compute[257263]: 2025-11-23 21:20:22.231 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:20:22 np0005532761 nova_compute[257263]: 2025-11-23 21:20:22.247 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:20:22 np0005532761 nova_compute[257263]: 2025-11-23 21:20:22.248 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:20:22 np0005532761 nova_compute[257263]: 2025-11-23 21:20:22.248 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:20:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:24 np0005532761 nova_compute[257263]: 2025-11-23 21:20:24.248 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:24.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:26.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:27.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:20:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:20:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:28.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:30 np0005532761 nova_compute[257263]: 2025-11-23 21:20:30.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:20:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:31 np0005532761 podman[285011]: 2025-11-23 21:20:31.535540232 +0000 UTC m=+0.054580437 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 23 16:20:31 np0005532761 podman[285010]: 2025-11-23 21:20:31.588674389 +0000 UTC m=+0.109614885 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 23 16:20:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:20:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:20:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:20:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:20:35 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.4 total, 600.0 interval
    Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 11K writes, 3290 syncs, 3.54 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2135 writes, 6899 keys, 2135 commit groups, 1.0 writes per commit group, ingest: 8.10 MB, 0.01 MB/s
    Interval WAL: 2135 writes, 914 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:20:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:37.496Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:38.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:20:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:43.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:20:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:43 np0005532761 podman[285087]: 2025-11-23 21:20:43.808489113 +0000 UTC m=+0.054143025 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:20:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:20:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:20:44 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:20:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.50149185 +0000 UTC m=+0.043091159 container create c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:20:45 np0005532761 systemd[1]: Started libpod-conmon-c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642.scope.
Nov 23 16:20:45 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.481926645 +0000 UTC m=+0.023525974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.579402823 +0000 UTC m=+0.121002152 container init c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.585893757 +0000 UTC m=+0.127493086 container start c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.589632928 +0000 UTC m=+0.131232247 container attach c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:20:45 np0005532761 tender_colden[285339]: 167 167
Nov 23 16:20:45 np0005532761 systemd[1]: libpod-c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642.scope: Deactivated successfully.
Nov 23 16:20:45 np0005532761 conmon[285339]: conmon c6eab671ee5bcb74b190 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642.scope/container/memory.events
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.594060637 +0000 UTC m=+0.135660036 container died c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 16:20:45 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1a59bff459e81a8b4306f3bdc7a607492721d8fa3a42a13a063239d31f22d783-merged.mount: Deactivated successfully.
Nov 23 16:20:45 np0005532761 podman[285323]: 2025-11-23 21:20:45.641324566 +0000 UTC m=+0.182923915 container remove c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_colden, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:20:45 np0005532761 systemd[1]: libpod-conmon-c6eab671ee5bcb74b19021f2a2c6b0c421f3ee6c9655d83ba27272df4e9f6642.scope: Deactivated successfully.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.749429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845749511, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2422, "num_deletes": 508, "total_data_size": 3579413, "memory_usage": 3636792, "flush_reason": "Manual Compaction"}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845782370, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3492192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31604, "largest_seqno": 34025, "table_properties": {"data_size": 3481003, "index_size": 6403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 30131, "raw_average_key_size": 20, "raw_value_size": 3455012, "raw_average_value_size": 2350, "num_data_blocks": 274, "num_entries": 1470, "num_filter_entries": 1470, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932684, "oldest_key_time": 1763932684, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 32998 microseconds, and 14101 cpu microseconds.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.782431) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3492192 bytes OK
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.782460) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.784755) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.784790) EVENT_LOG_v1 {"time_micros": 1763932845784782, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.784838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3567327, prev total WAL file size 3567327, number of live WAL files 2.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.787934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353033' seq:0, type:0; will stop at (end)
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3410KB)], [68(13MB)]
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845788021, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18098148, "oldest_snapshot_seqno": -1}
Nov 23 16:20:45 np0005532761 podman[285363]: 2025-11-23 21:20:45.829239214 +0000 UTC m=+0.044756993 container create f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:20:45 np0005532761 podman[285363]: 2025-11-23 21:20:45.812747401 +0000 UTC m=+0.028265200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6668 keys, 16619760 bytes, temperature: kUnknown
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845950064, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16619760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16573329, "index_size": 28655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 173927, "raw_average_key_size": 26, "raw_value_size": 16451550, "raw_average_value_size": 2467, "num_data_blocks": 1141, "num_entries": 6668, "num_filter_entries": 6668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.950340) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16619760 bytes
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.952142) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.6 rd, 102.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 13.9 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(9.9) write-amplify(4.8) OK, records in: 7701, records dropped: 1033 output_compression: NoCompression
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.952169) EVENT_LOG_v1 {"time_micros": 1763932845952158, "job": 38, "event": "compaction_finished", "compaction_time_micros": 162111, "compaction_time_cpu_micros": 53627, "output_level": 6, "num_output_files": 1, "total_output_size": 16619760, "num_input_records": 7701, "num_output_records": 6668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845953369, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845957885, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.786905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.957946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.957951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.957954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.957956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:20:45.957959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532761 systemd[1]: Started libpod-conmon-f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52.scope.
Nov 23 16:20:46 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:46 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:46 np0005532761 podman[285363]: 2025-11-23 21:20:46.040231302 +0000 UTC m=+0.255749181 container init f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 16:20:46 np0005532761 podman[285363]: 2025-11-23 21:20:46.058942454 +0000 UTC m=+0.274460243 container start f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 16:20:46 np0005532761 podman[285363]: 2025-11-23 21:20:46.062690475 +0000 UTC m=+0.278208264 container attach f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:20:46 np0005532761 trusting_aryabhata[285381]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:20:46 np0005532761 trusting_aryabhata[285381]: --> All data devices are unavailable
Nov 23 16:20:46 np0005532761 systemd[1]: libpod-f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52.scope: Deactivated successfully.
Nov 23 16:20:46 np0005532761 podman[285363]: 2025-11-23 21:20:46.382268729 +0000 UTC m=+0.597786508 container died f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 16:20:46 np0005532761 systemd[1]: var-lib-containers-storage-overlay-23e3a6ea150eb0094384794060eb64e20e3f871bd88d1a9ed135b570d5aa3b17-merged.mount: Deactivated successfully.
Nov 23 16:20:46 np0005532761 podman[285363]: 2025-11-23 21:20:46.422782547 +0000 UTC m=+0.638300326 container remove f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 16:20:46 np0005532761 systemd[1]: libpod-conmon-f84184bf84b1ffeac29f1b7be886f4a4226a1e2eed60e866584b3f4d49491d52.scope: Deactivated successfully.
Nov 23 16:20:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 16:20:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.039841733 +0000 UTC m=+0.053018725 container create dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:20:47 np0005532761 systemd[1]: Started libpod-conmon-dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6.scope.
Nov 23 16:20:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.014898573 +0000 UTC m=+0.028075675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.110317156 +0000 UTC m=+0.123494168 container init dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.116874592 +0000 UTC m=+0.130051584 container start dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.119716488 +0000 UTC m=+0.132893480 container attach dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:20:47 np0005532761 ecstatic_cerf[285516]: 167 167
Nov 23 16:20:47 np0005532761 systemd[1]: libpod-dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6.scope: Deactivated successfully.
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.121722702 +0000 UTC m=+0.134899684 container died dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 16:20:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c8ab6d645807e0fcd68e8af3d58d5c15424fdcce667d4bb9230fd4d7c8bf9aef-merged.mount: Deactivated successfully.
Nov 23 16:20:47 np0005532761 podman[285498]: 2025-11-23 21:20:47.149664952 +0000 UTC m=+0.162841944 container remove dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:20:47 np0005532761 systemd[1]: libpod-conmon-dc971540b000a1fcd2de0cc4c7d139ebc619fcd33d9af6d8245c6573600057e6.scope: Deactivated successfully.
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.298318415 +0000 UTC m=+0.040004015 container create e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:20:47 np0005532761 systemd[1]: Started libpod-conmon-e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9.scope.
Nov 23 16:20:47 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2a68a05c12d74e60e429947621df5edd23b3b4415ed43163c5f3656cd16f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2a68a05c12d74e60e429947621df5edd23b3b4415ed43163c5f3656cd16f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2a68a05c12d74e60e429947621df5edd23b3b4415ed43163c5f3656cd16f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:47 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2a68a05c12d74e60e429947621df5edd23b3b4415ed43163c5f3656cd16f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.280828646 +0000 UTC m=+0.022514276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.383888414 +0000 UTC m=+0.125574034 container init e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.391634503 +0000 UTC m=+0.133320103 container start e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.395119966 +0000 UTC m=+0.136805606 container attach e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:20:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:47.498Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]: {
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:    "1": [
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:        {
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "devices": [
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "/dev/loop3"
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            ],
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "lv_name": "ceph_lv0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "lv_size": "21470642176",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "name": "ceph_lv0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "tags": {
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.cluster_name": "ceph",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.crush_device_class": "",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.encrypted": "0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.osd_id": "1",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.type": "block",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.vdo": "0",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:                "ceph.with_tpm": "0"
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            },
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "type": "block",
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:            "vg_name": "ceph_vg0"
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:        }
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]:    ]
Nov 23 16:20:47 np0005532761 optimistic_nash[285557]: }
Nov 23 16:20:47 np0005532761 systemd[1]: libpod-e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9.scope: Deactivated successfully.
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.699549164 +0000 UTC m=+0.441234804 container died e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 16:20:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:47 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ad2a68a05c12d74e60e429947621df5edd23b3b4415ed43163c5f3656cd16f33-merged.mount: Deactivated successfully.
Nov 23 16:20:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:47 np0005532761 podman[285540]: 2025-11-23 21:20:47.750147593 +0000 UTC m=+0.491833193 container remove e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:20:47 np0005532761 systemd[1]: libpod-conmon-e9f67d8acda1d91f765093d6671e8073eeb31cda3f74ab1a399067afa48d62c9.scope: Deactivated successfully.
Nov 23 16:20:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:20:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.412429802 +0000 UTC m=+0.063388723 container create de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:20:48 np0005532761 systemd[1]: Started libpod-conmon-de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807.scope.
Nov 23 16:20:48 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.371167295 +0000 UTC m=+0.022126236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.483873232 +0000 UTC m=+0.134832173 container init de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.491729853 +0000 UTC m=+0.142688774 container start de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:20:48 np0005532761 heuristic_elion[285710]: 167 167
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.49610182 +0000 UTC m=+0.147060771 container attach de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:20:48 np0005532761 systemd[1]: libpod-de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807.scope: Deactivated successfully.
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.497380115 +0000 UTC m=+0.148339106 container died de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:20:48 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2ea53ef4e99b0891d2295d848d84cb716e00a5d4ff3646d9e0ce34f6af2818c8-merged.mount: Deactivated successfully.
Nov 23 16:20:48 np0005532761 podman[285694]: 2025-11-23 21:20:48.545642881 +0000 UTC m=+0.196601812 container remove de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_elion, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:20:48 np0005532761 systemd[1]: libpod-conmon-de3d6461b79b624f62e4afed72d7729c5275ae20d081aabe4d66cbcca3a72807.scope: Deactivated successfully.
Nov 23 16:20:48 np0005532761 podman[285736]: 2025-11-23 21:20:48.722014358 +0000 UTC m=+0.053727904 container create be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:20:48 np0005532761 systemd[1]: Started libpod-conmon-be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979.scope.
Nov 23 16:20:48 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:20:48 np0005532761 podman[285736]: 2025-11-23 21:20:48.69228645 +0000 UTC m=+0.024000016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:20:48 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a958b3ff8b1ef0806f5dc2960a5a537f5e7d213c770d91ae26f384f49a44d6a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:48 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a958b3ff8b1ef0806f5dc2960a5a537f5e7d213c770d91ae26f384f49a44d6a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:48 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a958b3ff8b1ef0806f5dc2960a5a537f5e7d213c770d91ae26f384f49a44d6a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:48 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a958b3ff8b1ef0806f5dc2960a5a537f5e7d213c770d91ae26f384f49a44d6a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:20:48 np0005532761 podman[285736]: 2025-11-23 21:20:48.804556816 +0000 UTC m=+0.136270382 container init be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 23 16:20:48 np0005532761 podman[285736]: 2025-11-23 21:20:48.810608758 +0000 UTC m=+0.142322304 container start be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:20:48 np0005532761 podman[285736]: 2025-11-23 21:20:48.81440809 +0000 UTC m=+0.146121636 container attach be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:20:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:48.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:20:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:48.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:20:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:48.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 715 B/s rd, 0 op/s
Nov 23 16:20:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:49 np0005532761 lvm[285828]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:20:49 np0005532761 lvm[285828]: VG ceph_vg0 finished
Nov 23 16:20:49 np0005532761 trusting_mclaren[285753]: {}
Nov 23 16:20:49 np0005532761 systemd[1]: libpod-be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979.scope: Deactivated successfully.
Nov 23 16:20:49 np0005532761 systemd[1]: libpod-be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979.scope: Consumed 1.033s CPU time.
Nov 23 16:20:49 np0005532761 podman[285736]: 2025-11-23 21:20:49.495899417 +0000 UTC m=+0.827612963 container died be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 16:20:49 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a958b3ff8b1ef0806f5dc2960a5a537f5e7d213c770d91ae26f384f49a44d6a7-merged.mount: Deactivated successfully.
Nov 23 16:20:49 np0005532761 podman[285736]: 2025-11-23 21:20:49.536860937 +0000 UTC m=+0.868574473 container remove be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:20:49 np0005532761 systemd[1]: libpod-conmon-be46e9133f84b5133898c05413e5caf361ce830eb0b53cd051835a10d7804979.scope: Deactivated successfully.
Nov 23 16:20:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:20:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:49 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:20:49 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:50 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:50 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 16:20:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:20:51.878 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:20:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:20:51.879 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:20:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:20:51.879 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:20:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 715 B/s rd, 0 op/s
Nov 23 16:20:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:53.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:20:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:20:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:20:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:20:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:20:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:57.499Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:20:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:57.499Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:20:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:57.499Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:20:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:20:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:20:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:20:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:20:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:20:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:20:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:20:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:01.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:02 np0005532761 podman[285883]: 2025-11-23 21:21:02.567685414 +0000 UTC m=+0.063802965 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:21:02 np0005532761 podman[285882]: 2025-11-23 21:21:02.592693216 +0000 UTC m=+0.096456402 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:03.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:21:03
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.nfs', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:21:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:21:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:21:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:21:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:05.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:07 np0005532761 nova_compute[257263]: 2025-11-23 21:21:07.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:07.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:07.500Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:21:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:21:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:21:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693615462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:21:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:21:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693615462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:21:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:09.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:21:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:12 np0005532761 nova_compute[257263]: 2025-11-23 21:21:12.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:21:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:21:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:21:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:14 np0005532761 podman[285966]: 2025-11-23 21:21:14.565205548 +0000 UTC m=+0.077551424 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:21:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:21:15 np0005532761 nova_compute[257263]: 2025-11-23 21:21:15.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:15.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:16 np0005532761 nova_compute[257263]: 2025-11-23 21:21:16.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:21:17 np0005532761 nova_compute[257263]: 2025-11-23 21:21:17.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:17.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:17.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:17.501Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:21:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:21:18 np0005532761 nova_compute[257263]: 2025-11-23 21:21:18.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:18 np0005532761 nova_compute[257263]: 2025-11-23 21:21:18.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:21:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:21:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
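The mgr (mgr.compute-0.oyehye) polls the monitor for the OSD blocklist every 15 seconds (21:21:18, 21:21:33, 21:21:48 below), and the audit channel records each dispatch. The same query can be reproduced from a shell, which is handy when checking whether a fencing workflow left stale entries behind. A sketch, assuming a reachable cluster, the ceph CLI, and a usable keyring:

    import json
    import subprocess

    # Same query the mgr dispatches in the audit log above; assumes the `ceph`
    # CLI is installed and can authenticate against this cluster.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(f"{len(entries)} blocklist entries")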
Nov 23 16:21:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:21:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
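Both Alertmanager failures above are delivery problems, not alerting problems: the ceph-dashboard webhook receivers on compute-1 and compute-2 either time out at the TCP level (dial tcp ... i/o timeout) or never answer within the notification deadline (context deadline exceeded). The two endpoints named in the errors can be probed directly; hostnames and port are taken from the log, while the script itself is only illustrative:

    import socket

    # Endpoints copied from the webhook errors above.
    TARGETS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in TARGETS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")

If both connects fail, the receivers are down or filtered, and the retries Alertmanager logs here will keep being canceled.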
Nov 23 16:21:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:21:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
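This four-line ganesha sequence repeats every five seconds throughout the section: the server enters a 90-second grace period, reloads client info from the RADOS backend, finds no clients to reclaim (clid count(0)), and rados_cluster_grace_enforcing reports ret=-45; a few seconds later the cycle starts again. A small watcher that flags the loop when journal text is piped in on stdin; the patterns are taken verbatim from the messages above:

    import re
    import sys

    # Flags repeated grace re-entry with no reclaimable clients; feed it
    # journal output on stdin, e.g. `journalctl -f | python3 grace_watch.py`.
    GRACE_RE = re.compile(r"NFS Server Now IN GRACE, duration (\d+)")
    CLID_RE = re.compile(r"check grace:reclaim complete\(\d+\) clid count\((\d+)\)")

    entries = 0
    for line in sys.stdin:
        if GRACE_RE.search(line):
            entries += 1
        m = CLID_RE.search(line)
        if m and m.group(1) == "0" and entries > 1:
            print(f"grace re-entered {entries} times with no clients to reclaim")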
Nov 23 16:21:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.858275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879858367, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 541, "num_deletes": 251, "total_data_size": 690852, "memory_usage": 700688, "flush_reason": "Manual Compaction"}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879868002, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 682582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34026, "largest_seqno": 34566, "table_properties": {"data_size": 679529, "index_size": 1025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7144, "raw_average_key_size": 19, "raw_value_size": 673450, "raw_average_value_size": 1815, "num_data_blocks": 44, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932846, "oldest_key_time": 1763932846, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9754 microseconds, and 4130 cpu microseconds.
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.868041) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 682582 bytes OK
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.868059) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869388) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869399) EVENT_LOG_v1 {"time_micros": 1763932879869396, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 687810, prev total WAL file size 687810, number of live WAL files 2.
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(666KB)], [71(15MB)]
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879869880, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17302342, "oldest_snapshot_seqno": -1}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6525 keys, 15181255 bytes, temperature: kUnknown
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879958434, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15181255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15136948, "index_size": 26917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171660, "raw_average_key_size": 26, "raw_value_size": 15018680, "raw_average_value_size": 2301, "num_data_blocks": 1063, "num_entries": 6525, "num_filter_entries": 6525, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.958650) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15181255 bytes
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.959918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 171.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 15.8 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(47.6) write-amplify(22.2) OK, records in: 7039, records dropped: 514 output_compression: NoCompression
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.959940) EVENT_LOG_v1 {"time_micros": 1763932879959931, "job": 40, "event": "compaction_finished", "compaction_time_micros": 88620, "compaction_time_cpu_micros": 27906, "output_level": 6, "num_output_files": 1, "total_output_size": 15181255, "num_input_records": 7039, "num_output_records": 6525, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879960259, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879964355, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
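The burst above is one manual compaction cycle in the monitor's RocksDB store: job 39 flushes a 541-entry memtable to the 666 KB L0 table #73, job 40 compacts that table together with the 15 MB L6 table #71 into table #74 (7,039 records in, 514 dropped, write-amplification 22.2), and the obsolete WAL and SST files are deleted. Each step also emits a machine-readable EVENT_LOG_v1 JSON record, which can be pulled straight out of the journal; a sketch that assumes only the line format visible above:

    import json
    import sys

    # Extracts RocksDB EVENT_LOG_v1 JSON payloads from journal lines on stdin
    # and prints a one-line summary per event.
    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        event = json.loads(line[idx + len(MARKER):])
        summary = {k: event[k] for k in ("job", "time_micros") if k in event}
        print(event["event"], summary)

Run over this section it yields flush_started, flush_finished, compaction_started, compaction_finished, and two table_file_creation and two table_file_deletion events for jobs 39 and 40.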
Nov 23 16:21:20 np0005532761 nova_compute[257263]: 2025-11-23 21:21:20.036 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:21:20 np0005532761 nova_compute[257263]: 2025-11-23 21:21:20.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:21:20 np0005532761 nova_compute[257263]: 2025-11-23 21:21:20.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:21:20 np0005532761 nova_compute[257263]: 2025-11-23 21:21:20.053 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:21:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:20 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:21:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:21.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.061 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.061 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:21:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:21:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:21.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:21:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:21:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581265608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.526 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.660 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.661 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.661 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.662 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.747 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.748 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:21:21 np0005532761 nova_compute[257263]: 2025-11-23 21:21:21.769 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:21:22 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:21:22 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57458500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:21:22 np0005532761 nova_compute[257263]: 2025-11-23 21:21:22.213 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:21:22 np0005532761 nova_compute[257263]: 2025-11-23 21:21:22.218 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:21:22 np0005532761 nova_compute[257263]: 2025-11-23 21:21:22.235 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:21:22 np0005532761 nova_compute[257263]: 2025-11-23 21:21:22.236 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:21:22 np0005532761 nova_compute[257263]: 2025-11-23 21:21:22.236 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
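That was one full update_available_resource pass: nova takes the compute_resources lock, shells out twice to ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf to size the RBD-backed disk, and re-reports an unchanged inventory. The ratios in that inventory are what placement actually schedules against: 8 VCPU at allocation_ratio 4.0 gives 32 schedulable vCPUs, 7679 MB RAM minus the 512 MB reservation gives 7167 MB, and 59 GB disk at ratio 0.9 caps at roughly 53 GB. The storage probe can be replayed by hand; the command line is copied from the log, while the JSON field names are an assumption about the common ceph df layout:

    import json
    import subprocess

    # Replays the probe nova runs above. The command is verbatim from the log;
    # the "stats" field names are an assumption about ceph df's JSON output.
    CMD = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]

    stats = json.loads(
        subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    )["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")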
Nov 23 16:21:22 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:23.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:24 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:26 np0005532761 nova_compute[257263]: 2025-11-23 21:21:26.237 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:21:26 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:27.502Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:21:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 23 16:21:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:21:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:21:28 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:31.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:33.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:21:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:21:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:21:33 np0005532761 podman[286077]: 2025-11-23 21:21:33.53388498 +0000 UTC m=+0.046128150 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:21:33 np0005532761 podman[286076]: 2025-11-23 21:21:33.607793775 +0000 UTC m=+0.126416426 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
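Both podman events above record a scheduled health check passing (health_status=healthy, health_failing_streak=0) for the ovn_metadata_agent and ovn_controller containers; the long config_data blob is simply the container definition echoed into the event. The current health state can be read back with podman inspect; a sketch, where the Go-template path is assumed to follow podman's Docker-compatible layout (older podman exposes it as .State.Healthcheck.Status instead):

    import subprocess

    # Reads back the health state the events above report. The template path
    # is an assumption; use .State.Healthcheck.Status on older podman.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(f"{name}: {status}")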
Nov 23 16:21:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:35.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:21:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:37.503Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:21:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:21:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1005 B/s rd, 0 op/s
Nov 23 16:21:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:41.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 16:21:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:45.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:45 np0005532761 podman[286135]: 2025-11-23 21:21:45.559507235 +0000 UTC m=+0.075654213 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:21:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 16:21:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:47.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:47.504Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:21:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:21:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:21:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:21:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:48.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 16:21:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:49.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:21:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:21:50 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:21:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:21:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:21:51.880 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:21:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:21:51.880 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:21:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:21:51.880 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.290687977 +0000 UTC m=+0.038228819 container create c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:52 np0005532761 systemd[1]: Started libpod-conmon-c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a.scope.
Nov 23 16:21:52 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.354382057 +0000 UTC m=+0.101922899 container init c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.364289744 +0000 UTC m=+0.111830586 container start c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.367252533 +0000 UTC m=+0.114793375 container attach c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 16:21:52 np0005532761 beautiful_mendeleev[286374]: 167 167
Nov 23 16:21:52 np0005532761 systemd[1]: libpod-c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a.scope: Deactivated successfully.
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.368199539 +0000 UTC m=+0.115740381 container died c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.27331277 +0000 UTC m=+0.020853632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:52 np0005532761 systemd[1]: var-lib-containers-storage-overlay-af803429fc7b768dd434c5c7c2ab1e0f625a1d76a00262ed2efe37580bfbb70c-merged.mount: Deactivated successfully.
Nov 23 16:21:52 np0005532761 podman[286358]: 2025-11-23 21:21:52.406069856 +0000 UTC m=+0.153610708 container remove c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mendeleev, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:21:52 np0005532761 systemd[1]: libpod-conmon-c954916253e435cdd3dd2d953e72477783e0a83255fa93ad1f84bdca7de08e3a.scope: Deactivated successfully.
Nov 23 16:21:52 np0005532761 podman[286397]: 2025-11-23 21:21:52.560827104 +0000 UTC m=+0.046998354 container create 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 16:21:52 np0005532761 systemd[1]: Started libpod-conmon-06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c.scope.
Nov 23 16:21:52 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:52 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:52 np0005532761 podman[286397]: 2025-11-23 21:21:52.539054948 +0000 UTC m=+0.025226248 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:52 np0005532761 podman[286397]: 2025-11-23 21:21:52.636906947 +0000 UTC m=+0.123078287 container init 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:21:52 np0005532761 podman[286397]: 2025-11-23 21:21:52.647430289 +0000 UTC m=+0.133601539 container start 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:21:52 np0005532761 podman[286397]: 2025-11-23 21:21:52.657944532 +0000 UTC m=+0.144115812 container attach 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:21:53 np0005532761 interesting_wilson[286414]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:21:53 np0005532761 interesting_wilson[286414]: --> All data devices are unavailable
Nov 23 16:21:53 np0005532761 systemd[1]: libpod-06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c.scope: Deactivated successfully.
Nov 23 16:21:53 np0005532761 podman[286397]: 2025-11-23 21:21:53.045516493 +0000 UTC m=+0.531687753 container died 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:21:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a442b11cea9a482081a348ccfaece5da8dfdfa83da75e60220d7d936f7a13876-merged.mount: Deactivated successfully.
Nov 23 16:21:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:53.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:53 np0005532761 podman[286397]: 2025-11-23 21:21:53.093697467 +0000 UTC m=+0.579868727 container remove 06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:53 np0005532761 systemd[1]: libpod-conmon-06a88fdc19076c21191f471677570f3eb6f5e7a23587492ccaee161829761f2c.scope: Deactivated successfully.
Nov 23 16:21:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.618944636 +0000 UTC m=+0.035010202 container create bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:21:53 np0005532761 systemd[1]: Started libpod-conmon-bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7.scope.
Nov 23 16:21:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.692491842 +0000 UTC m=+0.108557428 container init bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.605227088 +0000 UTC m=+0.021292664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.702258354 +0000 UTC m=+0.118323920 container start bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.705584193 +0000 UTC m=+0.121649759 container attach bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:21:53 np0005532761 gallant_nobel[286549]: 167 167
Nov 23 16:21:53 np0005532761 systemd[1]: libpod-bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7.scope: Deactivated successfully.
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.708628135 +0000 UTC m=+0.124693691 container died bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:21:53 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ce4cf3768b25a34de10e2b44e6edf8fd80063b57e92c66ed1f2e215f3dded3c1-merged.mount: Deactivated successfully.
Nov 23 16:21:53 np0005532761 podman[286533]: 2025-11-23 21:21:53.746900283 +0000 UTC m=+0.162965859 container remove bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_nobel, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:53 np0005532761 systemd[1]: libpod-conmon-bfdf02ed1ff810dac42bb07d74cb6b48212069681844810f2e5c2e70d28482e7.scope: Deactivated successfully.
Nov 23 16:21:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:21:53 np0005532761 podman[286572]: 2025-11-23 21:21:53.90090583 +0000 UTC m=+0.045645817 container create 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:21:53 np0005532761 systemd[1]: Started libpod-conmon-952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec.scope.
Nov 23 16:21:53 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a5ecb9210d95eebb96d14987cc5a4a8d693c7d68b7569e34fea3c4c3435da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a5ecb9210d95eebb96d14987cc5a4a8d693c7d68b7569e34fea3c4c3435da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a5ecb9210d95eebb96d14987cc5a4a8d693c7d68b7569e34fea3c4c3435da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:53 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a5ecb9210d95eebb96d14987cc5a4a8d693c7d68b7569e34fea3c4c3435da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:53 np0005532761 podman[286572]: 2025-11-23 21:21:53.877798949 +0000 UTC m=+0.022538976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:53 np0005532761 podman[286572]: 2025-11-23 21:21:53.978557446 +0000 UTC m=+0.123297443 container init 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:53 np0005532761 podman[286572]: 2025-11-23 21:21:53.985891502 +0000 UTC m=+0.130631489 container start 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 16:21:53 np0005532761 podman[286572]: 2025-11-23 21:21:53.990087915 +0000 UTC m=+0.134827922 container attach 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 23 16:21:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]: {
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:    "1": [
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:        {
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "devices": [
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "/dev/loop3"
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            ],
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "lv_name": "ceph_lv0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "lv_size": "21470642176",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "name": "ceph_lv0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "tags": {
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.cluster_name": "ceph",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.crush_device_class": "",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.encrypted": "0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.osd_id": "1",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.type": "block",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.vdo": "0",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:                "ceph.with_tpm": "0"
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            },
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "type": "block",
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:            "vg_name": "ceph_vg0"
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:        }
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]:    ]
Nov 23 16:21:54 np0005532761 nice_lehmann[286588]: }
Nov 23 16:21:54 np0005532761 systemd[1]: libpod-952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec.scope: Deactivated successfully.
Nov 23 16:21:54 np0005532761 podman[286572]: 2025-11-23 21:21:54.266391318 +0000 UTC m=+0.411131385 container died 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:21:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a5a5ecb9210d95eebb96d14987cc5a4a8d693c7d68b7569e34fea3c4c3435da1-merged.mount: Deactivated successfully.
Nov 23 16:21:54 np0005532761 podman[286572]: 2025-11-23 21:21:54.311479889 +0000 UTC m=+0.456219876 container remove 952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:21:54 np0005532761 systemd[1]: libpod-conmon-952b0c378ee2a29dadfe7f09fa03d0629ba505935cd06f59de030c2df65f60ec.scope: Deactivated successfully.
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.839951904 +0000 UTC m=+0.040735505 container create 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:21:54 np0005532761 systemd[1]: Started libpod-conmon-04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821.scope.
Nov 23 16:21:54 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.81970238 +0000 UTC m=+0.020486011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.918958107 +0000 UTC m=+0.119741748 container init 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.926777186 +0000 UTC m=+0.127560787 container start 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:21:54 np0005532761 wizardly_shaw[286721]: 167 167
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.931207476 +0000 UTC m=+0.131991097 container attach 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:21:54 np0005532761 systemd[1]: libpod-04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821.scope: Deactivated successfully.
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.93251511 +0000 UTC m=+0.133298721 container died 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:21:54 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2f225c701ad298350099270b6b9dd21ab4ee75a3cf63444496139842a115652c-merged.mount: Deactivated successfully.
Nov 23 16:21:54 np0005532761 podman[286705]: 2025-11-23 21:21:54.971966161 +0000 UTC m=+0.172749762 container remove 04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_shaw, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:21:54 np0005532761 systemd[1]: libpod-conmon-04d5d6a2a0340359586170e5e3b6c81e6fcc8df878908feb6244d6b2a5f20821.scope: Deactivated successfully.
Nov 23 16:21:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.112840314 +0000 UTC m=+0.037256091 container create 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 16:21:55 np0005532761 systemd[1]: Started libpod-conmon-77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02.scope.
Nov 23 16:21:55 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:21:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f29640b61959b56566042372468822c2c7b54aae608cc26dde9d90ab8abced7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f29640b61959b56566042372468822c2c7b54aae608cc26dde9d90ab8abced7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f29640b61959b56566042372468822c2c7b54aae608cc26dde9d90ab8abced7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:55 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f29640b61959b56566042372468822c2c7b54aae608cc26dde9d90ab8abced7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.096216157 +0000 UTC m=+0.020631934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.192414032 +0000 UTC m=+0.116829819 container init 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:21:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.201912947 +0000 UTC m=+0.126328714 container start 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.207201789 +0000 UTC m=+0.131617556 container attach 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:21:55 np0005532761 lvm[286836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:21:55 np0005532761 lvm[286836]: VG ceph_vg0 finished
Nov 23 16:21:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:21:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:55 np0005532761 laughing_robinson[286761]: {}
Nov 23 16:21:55 np0005532761 systemd[1]: libpod-77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02.scope: Deactivated successfully.
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.851770843 +0000 UTC m=+0.776186610 container died 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:21:55 np0005532761 systemd[1]: libpod-77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02.scope: Consumed 1.020s CPU time.
Nov 23 16:21:55 np0005532761 systemd[1]: var-lib-containers-storage-overlay-5f29640b61959b56566042372468822c2c7b54aae608cc26dde9d90ab8abced7-merged.mount: Deactivated successfully.
Nov 23 16:21:55 np0005532761 podman[286745]: 2025-11-23 21:21:55.917880169 +0000 UTC m=+0.842295976 container remove 77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:21:55 np0005532761 systemd[1]: libpod-conmon-77346500cfb0d03134d4ac0d723549d911eca24732206784b9b7f187b05f9b02.scope: Deactivated successfully.
Nov 23 16:21:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:21:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:21:55 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:56 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:56 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:57.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:21:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:21:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:57.505Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:21:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:21:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:21:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:21:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:58.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:21:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:21:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:21:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:21:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:21:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:21:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:21:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:21:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:21:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:21:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:21:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 16:22:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:22:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:22:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 16:22:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:22:03
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', '.nfs', '.mgr', 'vms', 'cephfs.cephfs.data']
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
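The balancer round above runs in upmap mode, capped at 5% misplaced objects and at most 10 prepared changes per pass; with the cluster already even it prepares 0/10. A minimal sketch of that gating, assuming a simplified stand-in for the mgr module's OSDMap optimizer interface (calc_pg_upmaps with this signature is illustrative, not the exact binding):

    MAX_MISPLACED = 0.05    # "max misplaced 0.050000" in the log
    MAX_CHANGES = 10        # "prepared 0/10 upmap changes"

    def balancer_round(misplaced_ratio, optimizer):
        # Don't optimize while too much data is already in flight.
        if misplaced_ratio >= MAX_MISPLACED:
            return []
        # Ask for up to MAX_CHANGES pg-upmap-items entries; an empty
        # list means the PG distribution is already even.
        return optimizer.calc_pg_upmaps(max_changes=MAX_CHANGES)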
Nov 23 16:22:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:22:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
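The blocklist poll above arrives at the monitor as a structured mon_command and is echoed on the audit channel. The same command can be issued from the python-rados binding; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and a default client keyring on the caller:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Same JSON payload as the mon_command in the audit log above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, errstr = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode() or errstr)
    cluster.shutdown()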
Nov 23 16:22:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:03.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
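Each pg_autoscaler line above is the same arithmetic: the printed pg target is the pool's share of raw capacity times its bias times a root PG target, which from these numbers works out to 300 (consistent with the mon_target_pg_per_osd default of 100 across 3 OSDs; an inference, since the log does not print it). The target is then quantized to a power of two and left at the current pg_num unless it is far enough off, which is why every pool here reports "quantized to" its current value. A worked check against the 'cephfs.cephfs.meta' line:

    # Reproduce the logged pg target for 'cephfs.cephfs.meta'.
    usage_ratio = 5.087256625643029e-07   # "using ... of space"
    bias = 4.0
    root_pg_target = 300                  # inferred: 100 PGs/OSD * 3 OSDs
    pg_target = usage_ratio * bias * root_pg_target
    print(pg_target)                      # 0.0006104707950771635, as logged
    # Rounded to a power of two this is 1, far below the current 16,
    # but no pool here is off by enough to force a pg_num change.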
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:22:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:04 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:04 np0005532761 podman[286890]: 2025-11-23 21:22:04.623663616 +0000 UTC m=+0.142255407 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:22:04 np0005532761 podman[286889]: 2025-11-23 21:22:04.638456903 +0000 UTC m=+0.158041310 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 16:22:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:05.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:07.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:22:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:07.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:22:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:07.508Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:22:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:07.508Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:22:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:22:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:08 np0005532761 nova_compute[257263]: 2025-11-23 21:22:08.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:22:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:22:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:09 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:11.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:12 np0005532761 nova_compute[257263]: 2025-11-23 21:22:12.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:12 np0005532761 nova_compute[257263]: 2025-11-23 21:22:12.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 16:22:12 np0005532761 nova_compute[257263]: 2025-11-23 21:22:12.046 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 16:22:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:13.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:14 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:14 np0005532761 nova_compute[257263]: 2025-11-23 21:22:14.045 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:15 np0005532761 nova_compute[257263]: 2025-11-23 21:22:15.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:16 np0005532761 nova_compute[257263]: 2025-11-23 21:22:16.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:16 np0005532761 podman[286969]: 2025-11-23 21:22:16.543349717 +0000 UTC m=+0.063552708 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 23 16:22:17 np0005532761 nova_compute[257263]: 2025-11-23 21:22:17.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:17.510Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:22:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:22:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:22:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:22:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:19 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:20 np0005532761 nova_compute[257263]: 2025-11-23 21:22:20.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:20 np0005532761 nova_compute[257263]: 2025-11-23 21:22:20.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
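_reclaim_queued_deletes above fires on the periodic schedule but exits immediately because reclaim_instance_interval is unset. A minimal sketch of that oslo.service pattern, assuming a conf object exposing reclaim_instance_interval (the class and body are illustrative, not nova's actual manager):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)
            self.conf = conf

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            # The task still runs on schedule; the feature gate is inside.
            if self.conf.reclaim_instance_interval <= 0:
                return  # logged above as "<= 0, skipping..."
            # ... reclaim soft-deleted instances older than the interval ...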
Nov 23 16:22:20 np0005532761 nova_compute[257263]: 2025-11-23 21:22:20.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:21 np0005532761 nova_compute[257263]: 2025-11-23 21:22:21.046 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:21 np0005532761 nova_compute[257263]: 2025-11-23 21:22:21.046 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:22:21 np0005532761 nova_compute[257263]: 2025-11-23 21:22:21.047 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:22:21 np0005532761 nova_compute[257263]: 2025-11-23 21:22:21.074 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:22:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.065 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.065 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.065 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:22:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:22:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:22:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:23 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:22:23 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304187688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.538 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
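The resource audit above shells out to ceph df as client.openstack and gets JSON back in about half a second. A standalone sketch that runs the same command and reads per-pool usage; the pool name and the exact stats keys are assumptions about the ceph df JSON layout:

    import json
    import subprocess

    # Same command as logged by oslo_concurrency.processutils above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    for pool in df["pools"]:
        if pool["name"] == "vms":   # illustrative pool choice
            stats = pool["stats"]
            print(stats["bytes_used"], stats["max_avail"])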
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.708 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.709 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.710 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.710 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:22:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.893 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:22:23 np0005532761 nova_compute[257263]: 2025-11-23 21:22:23.894 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:22:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:24 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.024 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing inventories for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.131 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating ProviderTree inventory for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.131 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating inventory in ProviderTree for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.160 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing aggregate associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.216 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing trait associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.248 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:22:24 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:22:24 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963775118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.773 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.781 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.800 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.802 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.802 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.803 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:24 np0005532761 nova_compute[257263]: 2025-11-23 21:22:24.803 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 16:22:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:25.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:27.511Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:22:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:22:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:27 np0005532761 nova_compute[257263]: 2025-11-23 21:22:27.816 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:28.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.826474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950826500, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 874, "num_deletes": 250, "total_data_size": 1390716, "memory_usage": 1421520, "flush_reason": "Manual Compaction"}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950833274, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 901115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34567, "largest_seqno": 35440, "table_properties": {"data_size": 897476, "index_size": 1355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9792, "raw_average_key_size": 20, "raw_value_size": 889693, "raw_average_value_size": 1901, "num_data_blocks": 58, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932880, "oldest_key_time": 1763932880, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 6837 microseconds, and 3585 cpu microseconds.
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.833309) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 901115 bytes OK
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.833325) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.835331) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.835342) EVENT_LOG_v1 {"time_micros": 1763932950835338, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.835356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1386520, prev total WAL file size 1386520, number of live WAL files 2.
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.835858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(879KB)], [74(14MB)]
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950835911, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 16082370, "oldest_snapshot_seqno": -1}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6506 keys, 12457126 bytes, temperature: kUnknown
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950944835, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12457126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12416824, "index_size": 22912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171446, "raw_average_key_size": 26, "raw_value_size": 12302774, "raw_average_value_size": 1890, "num_data_blocks": 897, "num_entries": 6506, "num_filter_entries": 6506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.945043) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12457126 bytes
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.6 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(31.7) write-amplify(13.8) OK, records in: 6993, records dropped: 487 output_compression: NoCompression
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948378) EVENT_LOG_v1 {"time_micros": 1763932950948370, "job": 42, "event": "compaction_finished", "compaction_time_micros": 108974, "compaction_time_cpu_micros": 25455, "output_level": 6, "num_output_files": 1, "total_output_size": 12457126, "num_input_records": 6993, "num_output_records": 6506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.835770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950949252, "job": 0, "event": "table_file_deletion", "file_number": 76}
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950952054, "job": 0, "event": "table_file_deletion", "file_number": 74}
Nov 23 16:22:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:31.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:33 np0005532761 nova_compute[257263]: 2025-11-23 21:22:33.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:33.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:22:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:22:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:22:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:34 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:35.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:35.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:35 np0005532761 podman[287079]: 2025-11-23 21:22:35.5575493 +0000 UTC m=+0.066162908 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:22:35 np0005532761 nova_compute[257263]: 2025-11-23 21:22:35.585 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:22:35 np0005532761 podman[287078]: 2025-11-23 21:22:35.590708431 +0000 UTC m=+0.103183073 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:22:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:37.512Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:22:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:22:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:39 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:22:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:22:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:43.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:44 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:45.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 radosgw[95430]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:22:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:47.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:47.512Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:47 np0005532761 podman[287139]: 2025-11-23 21:22:47.52597504 +0000 UTC m=+0.052103861 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:22:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:22:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:22:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:22:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:22:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:22:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:48.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:49 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:49.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 16:22:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 16:22:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:22:51.881 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:22:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:22:51.882 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:22:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:22:51.882 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:22:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:53.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 16:22:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:54 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 16:22:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 0 B/s wr, 182 op/s
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:22:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:22:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:57.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:22:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:57.514Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:22:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:57.514Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:57] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Nov 23 16:22:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:22:57] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Nov 23 16:22:57 np0005532761 podman[287368]: 2025-11-23 21:22:57.898171233 +0000 UTC m=+0.058475591 container create 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:22:57 np0005532761 podman[287368]: 2025-11-23 21:22:57.870124349 +0000 UTC m=+0.030428737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:22:57 np0005532761 systemd[1]: Started libpod-conmon-125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380.scope.
Nov 23 16:22:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:22:58 np0005532761 podman[287368]: 2025-11-23 21:22:58.047849533 +0000 UTC m=+0.208153901 container init 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 16:22:58 np0005532761 podman[287368]: 2025-11-23 21:22:58.055654693 +0000 UTC m=+0.215959041 container start 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:22:58 np0005532761 practical_curran[287384]: 167 167
Nov 23 16:22:58 np0005532761 systemd[1]: libpod-125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380.scope: Deactivated successfully.
Nov 23 16:22:58 np0005532761 podman[287368]: 2025-11-23 21:22:58.068830087 +0000 UTC m=+0.229134445 container attach 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:22:58 np0005532761 podman[287368]: 2025-11-23 21:22:58.069590998 +0000 UTC m=+0.229895346 container died 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:22:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-b200c18fc88972f89512f4c253e0df1f6e0a328e6e44f48a17f0211413f3a93a-merged.mount: Deactivated successfully.
Nov 23 16:22:58 np0005532761 podman[287368]: 2025-11-23 21:22:58.112903691 +0000 UTC m=+0.273208039 container remove 125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:22:58 np0005532761 systemd[1]: libpod-conmon-125b677a62f75b55395e4e4afd8e2c6a9bbb3123b1e27ecb7edd0ac9c6965380.scope: Deactivated successfully.
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.273947767 +0000 UTC m=+0.048783421 container create 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 16:22:58 np0005532761 systemd[1]: Started libpod-conmon-3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf.scope.
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.250340663 +0000 UTC m=+0.025176317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:22:58 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:22:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:22:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:22:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:22:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:22:58 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.374466306 +0000 UTC m=+0.149302010 container init 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.38052398 +0000 UTC m=+0.155359604 container start 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.40994086 +0000 UTC m=+0.184776534 container attach 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 16:22:58 np0005532761 recursing_ride[287425]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:22:58 np0005532761 recursing_ride[287425]: --> All data devices are unavailable
Nov 23 16:22:58 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:22:58 np0005532761 systemd[1]: libpod-3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf.scope: Deactivated successfully.
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.728581159 +0000 UTC m=+0.503416833 container died 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 23 16:22:58 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c06beeb2dbf0b7c94b0fc273c408ecdd2ae7409076515f2ab28ed7ceda751342-merged.mount: Deactivated successfully.
Nov 23 16:22:58 np0005532761 podman[287409]: 2025-11-23 21:22:58.783833073 +0000 UTC m=+0.558668737 container remove 3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:22:58 np0005532761 systemd[1]: libpod-conmon-3e721f3d833a78be95d66379da5bef3fc29bd36d4da7a94f4b55ff8a930495cf.scope: Deactivated successfully.
Nov 23 16:22:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:22:58.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:22:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:22:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:22:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:22:59 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:22:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:22:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:59.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 0 B/s wr, 182 op/s
Nov 23 16:22:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:22:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.51186686 +0000 UTC m=+0.056791337 container create eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 16:22:59 np0005532761 systemd[1]: Started libpod-conmon-eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6.scope.
Nov 23 16:22:59 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.567433582 +0000 UTC m=+0.112358059 container init eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.573999388 +0000 UTC m=+0.118923845 container start eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.576711971 +0000 UTC m=+0.121636438 container attach eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:22:59 np0005532761 exciting_bardeen[287566]: 167 167
Nov 23 16:22:59 np0005532761 systemd[1]: libpod-eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6.scope: Deactivated successfully.
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.580988616 +0000 UTC m=+0.125913083 container died eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.492491819 +0000 UTC m=+0.037416326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:22:59 np0005532761 systemd[1]: var-lib-containers-storage-overlay-e81060a208a6e2b7216b289070dd82c6303c84d961ebc3df3f73d373489ffa62-merged.mount: Deactivated successfully.
Nov 23 16:22:59 np0005532761 podman[287548]: 2025-11-23 21:22:59.611858665 +0000 UTC m=+0.156783122 container remove eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:22:59 np0005532761 systemd[1]: libpod-conmon-eb0feb8776b0c7f47bcee8002b27d0d6f1cf4ee19c73bc00ad62e027c4a8dcc6.scope: Deactivated successfully.
Nov 23 16:22:59 np0005532761 podman[287590]: 2025-11-23 21:22:59.804986233 +0000 UTC m=+0.032117764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:22:59 np0005532761 podman[287590]: 2025-11-23 21:22:59.922646773 +0000 UTC m=+0.149778234 container create b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 16:22:59 np0005532761 systemd[1]: Started libpod-conmon-b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d.scope.
Nov 23 16:23:00 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:23:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a516a29bafbbfc17123c668e39f861e38668f1bef082aeb45fa0a45941e4f3d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a516a29bafbbfc17123c668e39f861e38668f1bef082aeb45fa0a45941e4f3d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a516a29bafbbfc17123c668e39f861e38668f1bef082aeb45fa0a45941e4f3d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:00 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a516a29bafbbfc17123c668e39f861e38668f1bef082aeb45fa0a45941e4f3d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:00 np0005532761 podman[287590]: 2025-11-23 21:23:00.063130067 +0000 UTC m=+0.290261618 container init b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Nov 23 16:23:00 np0005532761 podman[287590]: 2025-11-23 21:23:00.069505428 +0000 UTC m=+0.296636889 container start b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:23:00 np0005532761 podman[287590]: 2025-11-23 21:23:00.079722382 +0000 UTC m=+0.306853833 container attach b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]: {
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:    "1": [
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:        {
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "devices": [
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "/dev/loop3"
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            ],
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "lv_name": "ceph_lv0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "lv_size": "21470642176",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "name": "ceph_lv0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "tags": {
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.cluster_name": "ceph",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.crush_device_class": "",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.encrypted": "0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.osd_id": "1",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.type": "block",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.vdo": "0",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:                "ceph.with_tpm": "0"
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            },
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "type": "block",
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:            "vg_name": "ceph_vg0"
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:        }
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]:    ]
Nov 23 16:23:00 np0005532761 wonderful_liskov[287606]: }
Nov 23 16:23:00 np0005532761 systemd[1]: libpod-b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d.scope: Deactivated successfully.
Nov 23 16:23:00 np0005532761 podman[287590]: 2025-11-23 21:23:00.351313848 +0000 UTC m=+0.578445339 container died b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:23:00 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a516a29bafbbfc17123c668e39f861e38668f1bef082aeb45fa0a45941e4f3d4-merged.mount: Deactivated successfully.
Nov 23 16:23:00 np0005532761 podman[287590]: 2025-11-23 21:23:00.42953434 +0000 UTC m=+0.656665831 container remove b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 23 16:23:00 np0005532761 systemd[1]: libpod-conmon-b71644f75decb0ac70ecaf5287acc0222cb604d390d2424f030321e51bc2cf9d.scope: Deactivated successfully.
Nov 23 16:23:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:00 np0005532761 podman[287719]: 2025-11-23 21:23:00.981847516 +0000 UTC m=+0.039127222 container create ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:23:01 np0005532761 systemd[1]: Started libpod-conmon-ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5.scope.
Nov 23 16:23:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:00.965808765 +0000 UTC m=+0.023088491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:01.066271573 +0000 UTC m=+0.123551309 container init ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:01.077121855 +0000 UTC m=+0.134401561 container start ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:01.081358749 +0000 UTC m=+0.138638475 container attach ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 16:23:01 np0005532761 intelligent_meitner[287735]: 167 167
Nov 23 16:23:01 np0005532761 systemd[1]: libpod-ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5.scope: Deactivated successfully.
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:01.08329083 +0000 UTC m=+0.140570536 container died ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:23:01 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3cd542344b733bd95095cad41981897dacd3779a9904d3f027b89c8ec1b62be1-merged.mount: Deactivated successfully.
Nov 23 16:23:01 np0005532761 podman[287719]: 2025-11-23 21:23:01.122553455 +0000 UTC m=+0.179833171 container remove ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_meitner, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:23:01 np0005532761 systemd[1]: libpod-conmon-ebeb10b765ed224d269fb4bc2a49b7969d0b60ee56d2fa5e28d43d112730b9b5.scope: Deactivated successfully.
Nov 23 16:23:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:23:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:01.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:01 np0005532761 podman[287761]: 2025-11-23 21:23:01.310129154 +0000 UTC m=+0.039427810 container create 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:23:01 np0005532761 systemd[1]: Started libpod-conmon-07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9.scope.
Nov 23 16:23:01 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:23:01 np0005532761 podman[287761]: 2025-11-23 21:23:01.294488364 +0000 UTC m=+0.023787040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:23:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce8e4cdaac0a6071fda4e5c43c0fc7868fa27010fc9833e31f97ce488589778/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce8e4cdaac0a6071fda4e5c43c0fc7868fa27010fc9833e31f97ce488589778/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce8e4cdaac0a6071fda4e5c43c0fc7868fa27010fc9833e31f97ce488589778/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:01 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce8e4cdaac0a6071fda4e5c43c0fc7868fa27010fc9833e31f97ce488589778/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:23:01 np0005532761 podman[287761]: 2025-11-23 21:23:01.410159651 +0000 UTC m=+0.139458327 container init 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:23:01 np0005532761 podman[287761]: 2025-11-23 21:23:01.416921002 +0000 UTC m=+0.146219658 container start 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:23:01 np0005532761 podman[287761]: 2025-11-23 21:23:01.419529672 +0000 UTC m=+0.148828328 container attach 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 16:23:02 np0005532761 lvm[287851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:23:02 np0005532761 lvm[287851]: VG ceph_vg0 finished
Nov 23 16:23:02 np0005532761 bold_mclean[287777]: {}
Nov 23 16:23:02 np0005532761 systemd[1]: libpod-07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9.scope: Deactivated successfully.
Nov 23 16:23:02 np0005532761 podman[287761]: 2025-11-23 21:23:02.088868962 +0000 UTC m=+0.818167648 container died 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:23:02 np0005532761 systemd[1]: libpod-07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9.scope: Consumed 1.111s CPU time.
Nov 23 16:23:02 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3ce8e4cdaac0a6071fda4e5c43c0fc7868fa27010fc9833e31f97ce488589778-merged.mount: Deactivated successfully.
Nov 23 16:23:02 np0005532761 podman[287761]: 2025-11-23 21:23:02.127769677 +0000 UTC m=+0.857068333 container remove 07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 16:23:02 np0005532761 systemd[1]: libpod-conmon-07d36a10924a90f3218d3ede41306862d988ba715c4703b0a7d474db239051d9.scope: Deactivated successfully.
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:02 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:23:03
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'backups']
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:23:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:23:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:03.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:23:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:23:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:23:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:23:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:23:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:06 np0005532761 podman[287897]: 2025-11-23 21:23:06.571966016 +0000 UTC m=+0.072386795 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:23:06 np0005532761 podman[287896]: 2025-11-23 21:23:06.600961505 +0000 UTC m=+0.113542141 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:23:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:23:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:07.515Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:07] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:23:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:07] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:23:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:23:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2686989106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:23:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:23:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2686989106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:23:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:08.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:10 np0005532761 nova_compute[257263]: 2025-11-23 21:23:10.056 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:11.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:13.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:15 np0005532761 nova_compute[257263]: 2025-11-23 21:23:15.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:16 np0005532761 nova_compute[257263]: 2025-11-23 21:23:16.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:16 np0005532761 nova_compute[257263]: 2025-11-23 21:23:16.959 257267 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:23:16 np0005532761 nova_compute[257263]: 2025-11-23 21:23:16.991 257267 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:23:17 np0005532761 nova_compute[257263]: 2025-11-23 21:23:17.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:17.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:17.517Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:17.517Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:23:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:17.517Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:23:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:17] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:23:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:17] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 23 16:23:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:18 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:23:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:23:18 np0005532761 podman[287978]: 2025-11-23 21:23:18.533658386 +0000 UTC m=+0.056113689 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:23:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:18.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:19 np0005532761 nova_compute[257263]: 2025-11-23 21:23:19.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:22 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:22.008 164405 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 16:23:22 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:22.009 164405 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 16:23:22 np0005532761 nova_compute[257263]: 2025-11-23 21:23:22.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:22 np0005532761 nova_compute[257263]: 2025-11-23 21:23:22.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:23:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:23 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:23 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:23 np0005532761 nova_compute[257263]: 2025-11-23 21:23:23.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:23 np0005532761 nova_compute[257263]: 2025-11-23 21:23:23.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:23:23 np0005532761 nova_compute[257263]: 2025-11-23 21:23:23.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:23:23 np0005532761 nova_compute[257263]: 2025-11-23 21:23:23.050 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:23:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:23.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:24 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:24.010 164405 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fa015a79-13cd-4722-b3c7-7f2e111a2432, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.051 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.051 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.051 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.051 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.051 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:23:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:23:25 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338104820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.498 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.661 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.662 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4822MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.662 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.662 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.703 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.703 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:23:25 np0005532761 nova_compute[257263]: 2025-11-23 21:23:25.727 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:23:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:23:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352909187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:23:26 np0005532761 nova_compute[257263]: 2025-11-23 21:23:26.161 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:23:26 np0005532761 nova_compute[257263]: 2025-11-23 21:23:26.166 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:23:26 np0005532761 nova_compute[257263]: 2025-11-23 21:23:26.178 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:23:26 np0005532761 nova_compute[257263]: 2025-11-23 21:23:26.179 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:23:26 np0005532761 nova_compute[257263]: 2025-11-23 21:23:26.180 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:23:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:27.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:27.518Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:23:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:28 np0005532761 nova_compute[257263]: 2025-11-23 21:23:28.180 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:28.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:28.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:29.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
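[editor's note] The starting/done/beast triples every two seconds are anonymous "HEAD /" requests from 192.168.122.100 and 192.168.122.102 — the classic shape of HAProxy-style load-balancer health checks against the RGW beast frontend. An equivalent probe (the stdlib speaks HTTP/1.1 rather than the probes' HTTP/1.0, and the RGW port is not shown in these lines, so 8080 is an assumption):

    # Hand-rolled RGW health probe mirroring the HEAD / checks in the log.
    # Assumption: RGW listens on 8080 on this host (port not shown in the log).
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=3)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy RGW answers 200 to anonymous HEAD /
    conn.close()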
Nov 23 16:23:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
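[editor's note] _set_new_cache_sizes is the monitor's periodic memory autotuner dividing its cache budget between the incremental-osdmap, full-osdmap, and RocksDB KV caches; it reprints every ~5 s with identical numbers, i.e. the tuner is at steady state. The three allocations should account for nearly all of cache_size, which is the quick check:

    # Sanity-check the mon cache split logged above.
    cache_size = 1_020_054_731
    parts = dict(inc_alloc=343_932_928, full_alloc=348_127_232, kv_alloc=318_767_104)

    total = sum(parts.values())
    print(total, f"= {total / cache_size:.1%} of cache_size")  # ~99.1%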
Nov 23 16:23:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:31.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:31.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:23:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
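[editor's note] Every ~15 s the mgr (entity mgr.compute-0.oyehye, i.e. the dashboard/cephadm modules) asks the mon for the OSD blocklist, and the mon audits the dispatch. The same query from the node, wrapped so the JSON is parseable (requires an admin-capable keyring):

    # Run the same query the mgr dispatches, per the audit line above.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out)
    print(f"{len(entries)} blocklist entries")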
Nov 23 16:23:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:23:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:23:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:33.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:37.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Nov 23 16:23:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:37.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:37.519Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:37.519Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:37 np0005532761 podman[288093]: 2025-11-23 21:23:37.543558975 +0000 UTC m=+0.052951851 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 16:23:37 np0005532761 podman[288092]: 2025-11-23 21:23:37.632385393 +0000 UTC m=+0.136415054 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:23:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:23:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:38.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:38.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:38.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:39.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:40 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Nov 23 16:23:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:23:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:45.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:23:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:45.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:45 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:47.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:47.520Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:23:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:47.520Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:23:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:47.521Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:23:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:23:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:23:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:23:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:48.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:23:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:49.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:23:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:49 np0005532761 podman[288149]: 2025-11-23 21:23:49.539330957 +0000 UTC m=+0.058252105 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
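[editor's note] podman[288149] is a transient healthcheck run against the multipathd container, like the ovn_metadata_agent and ovn_controller runs at 16:23:37; all report healthy with a zero failing streak. Running one manually, sketched below (container name from the log; the inspect template path differs slightly across podman versions):

    # Trigger and read a container healthcheck like the timer-driven runs above.
    import subprocess

    NAME = "multipathd"  # container name from the log line

    run = subprocess.run(["podman", "healthcheck", "run", NAME])
    print("healthcheck exit code:", run.returncode)  # 0 == healthy

    status = subprocess.run(
        ["podman", "inspect", "-f", "{{.State.Health.Status}}", NAME],
        capture_output=True, text=True,
    ).stdout.strip()
    print("recorded status:", status)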
Nov 23 16:23:50 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:51.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:51.883 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:23:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:51.884 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:23:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:23:51.884 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
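[editor's note] The acquire/release pair around _check_child_processes is oslo.concurrency's lockutils at debug level: neutron's ProcessMonitor serializes its child-process sweep behind a named in-process lock, held here for under a millisecond. The same primitive in isolation (only the lock name is borrowed from the log):

    # oslo.concurrency named-lock sketch matching the log's acquire/release pair.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the named lock held; concurrent callers queue up.
        print("checking child processes")

    check_child_processes()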
Nov 23 16:23:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:53.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.002000053s ======
Nov 23 16:23:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 23 16:23:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:55 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:57.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:23:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:57.522Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:23:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:23:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:23:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:23:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:23:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:23:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:23:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:23:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:23:58.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:23:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:23:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:23:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:00 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:01.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:24:03
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'images', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
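[editor's note] This balancer block is one automatic pass: an upmap-mode plan (max 5% misplaced) over all twelve pools that prepared 0 of its budget of 10 upmap changes — the expected no-op on a cluster sitting at 337/337 active+clean. The corresponding manual queries, via the standard CLI wrapped in Python to keep one language throughout:

    # Query the balancer the way the automatic pass above does its planning.
    import subprocess

    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        print("$", " ".join(cmd))
        print(out)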
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:24:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:03.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:24:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:24:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
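[editor's note] Each pg_autoscaler pair logs a pool's share of the 64411926528-byte root and its bias, then a raw PG target quantized to a power of two (and here never below the pool's current pg_num). The logged raw targets equal usage_fraction x bias x 300; 300 is plausibly mon_target_pg_per_osd (default 100) times 3 OSDs — an assumption, since neither value appears in the log. Reproducing a few:

    # Reproduce the autoscaler's raw pg targets from the fractions logged above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd=100 -> 300 target PGs overall.
    TARGET_PGS = 3 * 100

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852,  1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        raw = usage * bias * TARGET_PGS
        print(f"{name}: raw target {raw:.6g}")  # matches the logged 'pg target' values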
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:24:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:24:03 np0005532761 podman[288384]: 2025-11-23 21:24:03.955187964 +0000 UTC m=+0.041543723 container create 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 23 16:24:03 np0005532761 systemd[1]: Started libpod-conmon-93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1.scope.
Nov 23 16:24:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:03.935205325 +0000 UTC m=+0.021561064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:04.050667602 +0000 UTC m=+0.137023331 container init 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:04.057655951 +0000 UTC m=+0.144011710 container start 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:04.061540826 +0000 UTC m=+0.147896575 container attach 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:24:04 np0005532761 zealous_ptolemy[288401]: 167 167
Nov 23 16:24:04 np0005532761 systemd[1]: libpod-93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1.scope: Deactivated successfully.
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:04.06318484 +0000 UTC m=+0.149540609 container died 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:24:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-61e616bfa72efe77a603f7197419d518f0f7addba58dae147b7d05988e6c8e58-merged.mount: Deactivated successfully.
Nov 23 16:24:04 np0005532761 podman[288384]: 2025-11-23 21:24:04.110890598 +0000 UTC m=+0.197246317 container remove 93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:24:04 np0005532761 systemd[1]: libpod-conmon-93d1e8547cf7d1283ee6478c3e9bcd4b4de17dfb1ebb89806cc7fa08516beab1.scope: Deactivated successfully.
Nov 23 16:24:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:24:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:04 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.346196252 +0000 UTC m=+0.072569811 container create 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:24:04 np0005532761 systemd[1]: Started libpod-conmon-3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d.scope.
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.318970567 +0000 UTC m=+0.045344216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:04 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:04 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.438032452 +0000 UTC m=+0.164406021 container init 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.451764383 +0000 UTC m=+0.178137942 container start 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.456020798 +0000 UTC m=+0.182394347 container attach 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 23 16:24:04 np0005532761 objective_hamilton[288441]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:24:04 np0005532761 objective_hamilton[288441]: --> All data devices are unavailable
Nov 23 16:24:04 np0005532761 systemd[1]: libpod-3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d.scope: Deactivated successfully.
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.833722236 +0000 UTC m=+0.560095815 container died 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:24:04 np0005532761 systemd[1]: var-lib-containers-storage-overlay-1cd1018db4bda107f7c2120f50d7264cafc686e2331473ec392878f06c45d7ef-merged.mount: Deactivated successfully.
Nov 23 16:24:04 np0005532761 podman[288425]: 2025-11-23 21:24:04.884080916 +0000 UTC m=+0.610454455 container remove 3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:24:04 np0005532761 systemd[1]: libpod-conmon-3540a6257df7d2a92919d613d57e73a49bf406f4769b73902b188374a445a07d.scope: Deactivated successfully.
Nov 23 16:24:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:05.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:05 np0005532761 ceph-mgr[74869]: [devicehealth INFO root] Check health
Nov 23 16:24:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:05.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.510014057 +0000 UTC m=+0.035070447 container create b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 16:24:05 np0005532761 systemd[1]: Started libpod-conmon-b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153.scope.
Nov 23 16:24:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.573373889 +0000 UTC m=+0.098430309 container init b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.580900292 +0000 UTC m=+0.105956692 container start b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.58418596 +0000 UTC m=+0.109242410 container attach b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:24:05 np0005532761 charming_buck[288582]: 167 167
Nov 23 16:24:05 np0005532761 systemd[1]: libpod-b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153.scope: Deactivated successfully.
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.586103753 +0000 UTC m=+0.111160143 container died b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.495387843 +0000 UTC m=+0.020444253 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:05 np0005532761 systemd[1]: var-lib-containers-storage-overlay-586932802b97fe3969899a8e33dec8d0142719861d1a0ba53e026e91215cff2d-merged.mount: Deactivated successfully.
Nov 23 16:24:05 np0005532761 podman[288565]: 2025-11-23 21:24:05.623495753 +0000 UTC m=+0.148552143 container remove b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:24:05 np0005532761 systemd[1]: libpod-conmon-b978017320c8c872ba6d50a52c414d6d6ce119fdcbf38fa944626a367ae8b153.scope: Deactivated successfully.
Nov 23 16:24:05 np0005532761 podman[288606]: 2025-11-23 21:24:05.846987497 +0000 UTC m=+0.054070771 container create 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:24:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:05 np0005532761 systemd[1]: Started libpod-conmon-73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841.scope.
Nov 23 16:24:05 np0005532761 podman[288606]: 2025-11-23 21:24:05.828363944 +0000 UTC m=+0.035447268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:05 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3680fdfbde32b67159fdd6ec30086f4728d1260fa865373eb3b011d971858681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3680fdfbde32b67159fdd6ec30086f4728d1260fa865373eb3b011d971858681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3680fdfbde32b67159fdd6ec30086f4728d1260fa865373eb3b011d971858681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:05 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3680fdfbde32b67159fdd6ec30086f4728d1260fa865373eb3b011d971858681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:05 np0005532761 podman[288606]: 2025-11-23 21:24:05.950890112 +0000 UTC m=+0.157973396 container init 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 16:24:05 np0005532761 podman[288606]: 2025-11-23 21:24:05.96820987 +0000 UTC m=+0.175293154 container start 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:24:05 np0005532761 podman[288606]: 2025-11-23 21:24:05.975513487 +0000 UTC m=+0.182596761 container attach 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]: {
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:    "1": [
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:        {
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "devices": [
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "/dev/loop3"
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            ],
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "lv_name": "ceph_lv0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "lv_size": "21470642176",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "name": "ceph_lv0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "tags": {
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.cluster_name": "ceph",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.crush_device_class": "",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.encrypted": "0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.osd_id": "1",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.type": "block",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.vdo": "0",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:                "ceph.with_tpm": "0"
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            },
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "type": "block",
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:            "vg_name": "ceph_vg0"
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:        }
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]:    ]
Nov 23 16:24:06 np0005532761 priceless_chebyshev[288622]: }
Nov 23 16:24:06 np0005532761 systemd[1]: libpod-73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841.scope: Deactivated successfully.
Nov 23 16:24:06 np0005532761 podman[288606]: 2025-11-23 21:24:06.295713624 +0000 UTC m=+0.502796898 container died 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 23 16:24:06 np0005532761 systemd[1]: var-lib-containers-storage-overlay-3680fdfbde32b67159fdd6ec30086f4728d1260fa865373eb3b011d971858681-merged.mount: Deactivated successfully.
Nov 23 16:24:06 np0005532761 podman[288606]: 2025-11-23 21:24:06.335689913 +0000 UTC m=+0.542773227 container remove 73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_chebyshev, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 23 16:24:06 np0005532761 systemd[1]: libpod-conmon-73a79a13ee477b541c7274e9b9db90a80a8441372930d721ea23b947dea84841.scope: Deactivated successfully.
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.024904043 +0000 UTC m=+0.052286143 container create 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:24:07 np0005532761 systemd[1]: Started libpod-conmon-992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786.scope.
Nov 23 16:24:07 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.00145921 +0000 UTC m=+0.028841310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.111185213 +0000 UTC m=+0.138567373 container init 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.117454423 +0000 UTC m=+0.144836493 container start 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.12107619 +0000 UTC m=+0.148458290 container attach 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 23 16:24:07 np0005532761 flamboyant_cori[288752]: 167 167
Nov 23 16:24:07 np0005532761 systemd[1]: libpod-992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786.scope: Deactivated successfully.
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.122266892 +0000 UTC m=+0.149648962 container died 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:24:07 np0005532761 systemd[1]: var-lib-containers-storage-overlay-104af05fe6bfbf57ce25ccdcf7935c2a1246c90adb20e59a3457a3eb5decab52-merged.mount: Deactivated successfully.
Nov 23 16:24:07 np0005532761 podman[288735]: 2025-11-23 21:24:07.165230662 +0000 UTC m=+0.192612732 container remove 992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cori, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:24:07 np0005532761 systemd[1]: libpod-conmon-992f412f9add75062cf269f0f9322946b200e70a7c6a3406e352ad421db99786.scope: Deactivated successfully.
Nov 23 16:24:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:07.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:07 np0005532761 podman[288779]: 2025-11-23 21:24:07.366704213 +0000 UTC m=+0.070915156 container create 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 23 16:24:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:07 np0005532761 systemd[1]: Started libpod-conmon-1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1.scope.
Nov 23 16:24:07 np0005532761 podman[288779]: 2025-11-23 21:24:07.33960541 +0000 UTC m=+0.043816423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:24:07 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:24:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a8578311b1267463978b9a5600bc218565e95e80c9e979c6a72d9da993780/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a8578311b1267463978b9a5600bc218565e95e80c9e979c6a72d9da993780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a8578311b1267463978b9a5600bc218565e95e80c9e979c6a72d9da993780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:07 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a8578311b1267463978b9a5600bc218565e95e80c9e979c6a72d9da993780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:24:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:07 np0005532761 podman[288779]: 2025-11-23 21:24:07.462604822 +0000 UTC m=+0.166815755 container init 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 16:24:07 np0005532761 podman[288779]: 2025-11-23 21:24:07.472413817 +0000 UTC m=+0.176624730 container start 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 23 16:24:07 np0005532761 podman[288779]: 2025-11-23 21:24:07.47733898 +0000 UTC m=+0.181549893 container attach 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:24:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:07.523Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:24:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:07.525Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:24:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:24:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:24:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:08 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:08 np0005532761 lvm[288886]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:24:08 np0005532761 lvm[288886]: VG ceph_vg0 finished
Nov 23 16:24:08 np0005532761 friendly_feynman[288796]: {}
Nov 23 16:24:08 np0005532761 systemd[1]: libpod-1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1.scope: Deactivated successfully.
Nov 23 16:24:08 np0005532761 systemd[1]: libpod-1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1.scope: Consumed 1.115s CPU time.
Nov 23 16:24:08 np0005532761 podman[288871]: 2025-11-23 21:24:08.192947153 +0000 UTC m=+0.089482337 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:24:08 np0005532761 podman[288870]: 2025-11-23 21:24:08.205602255 +0000 UTC m=+0.108428660 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:24:08 np0005532761 podman[288919]: 2025-11-23 21:24:08.232148142 +0000 UTC m=+0.026882808 container died 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 16:24:08 np0005532761 systemd[1]: var-lib-containers-storage-overlay-c79a8578311b1267463978b9a5600bc218565e95e80c9e979c6a72d9da993780-merged.mount: Deactivated successfully.
Nov 23 16:24:08 np0005532761 podman[288919]: 2025-11-23 21:24:08.273283682 +0000 UTC m=+0.068018268 container remove 1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feynman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:24:08 np0005532761 systemd[1]: libpod-conmon-1940930834caa6e57946317272859966992057db9340aec43f301a6939ee71d1.scope: Deactivated successfully.
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:08.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:09.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:10 np0005532761 nova_compute[257263]: 2025-11-23 21:24:10.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:11.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:13 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:13 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:24:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:24:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:13.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:24:15 np0005532761 nova_compute[257263]: 2025-11-23 21:24:15.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:15.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:17 np0005532761 nova_compute[257263]: 2025-11-23 21:24:17.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:24:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:24:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:17.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:17.526Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
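
These alertmanager dispatcher errors recur every ~10 seconds throughout the section: the ceph-dashboard webhook receivers on compute-1 and compute-2 are not answering before the notification deadline, so each Post to /api/prometheus_receiver times out after two attempts. Alertmanager delivers one JSON document per notification (status, labels, an alerts array, and so on); a throwaway stand-in receiver for debugging, stdlib only, with the path and port copied from the failing URL:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body or b"{}")
            # Alertmanager posts {"status": ..., "alerts": [...], ...}
            print(payload.get("status"), len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()
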
Nov 23 16:24:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:24:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
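
The paired mgr/cherrypy access lines record Prometheus 2.51.0 scraping the ceph-mgr prometheus module; the ~48 kB, HTTP 200 response is the plain-text metrics exposition. To eyeball the same endpoint by hand, a short sketch (the URL is an assumption; the module listens on port 9283 by default, which may differ here):

    from urllib.request import urlopen

    # Assumption: ceph-mgr prometheus module on its default port 9283.
    URL = "http://192.168.122.100:9283/metrics"

    with urlopen(URL, timeout=5) as resp:
        text = resp.read().decode()
    print(f"{len(text)} bytes of metrics")
    for line in text.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)
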
Nov 23 16:24:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:18 np0005532761 nova_compute[257263]: 2025-11-23 21:24:18.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:24:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
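
The handle_command/audit pair is the mgr (mgr.compute-0.oyehye) polling `osd blocklist ls` on the monitor, which it does roughly every 15 seconds in this excerpt. The same JSON-framed mon command can be issued from Python through librados; a minimal sketch, where the client name and conf path are assumptions:

    import json
    import rados

    # Sketch: send the same mon command seen in the audit log via librados.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() or outs)
    finally:
        cluster.shutdown()
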
Nov 23 16:24:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:18.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:19 np0005532761 nova_compute[257263]: 2025-11-23 21:24:19.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:19.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:20 np0005532761 podman[288998]: 2025-11-23 21:24:20.569327992 +0000 UTC m=+0.083771493 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
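
The podman[...] records are health-check events: the multipathd container passed its /openstack/healthcheck probe (health_status=healthy, failing streak 0), and the embedded config_data is the kolla/edpm container definition echoed into the event. The recorded health state can be read back on demand; a small sketch shelling out to podman, noting that the Go-template field name varies across podman versions:

    import subprocess

    # Sketch: query the health state podman recorded for a container.
    # Older podman releases spell the field .State.Healthcheck.Status
    # instead of .State.Health.Status.
    name = "multipathd"
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{name}: {out}")
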
Nov 23 16:24:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:21.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:21.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:22 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:22 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:23 np0005532761 nova_compute[257263]: 2025-11-23 21:24:23.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:23 np0005532761 nova_compute[257263]: 2025-11-23 21:24:23.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:24:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:23.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:24 np0005532761 nova_compute[257263]: 2025-11-23 21:24:24.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:24 np0005532761 nova_compute[257263]: 2025-11-23 21:24:24.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:24:24 np0005532761 nova_compute[257263]: 2025-11-23 21:24:24.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:24:24 np0005532761 nova_compute[257263]: 2025-11-23 21:24:24.050 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:24:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:25.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:25.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:25 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.060 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.061 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.061 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
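
The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's lockutils tracing the resource tracker's serializing lock; it is held for 0.000s here because clean_compute_node_cache had nothing to evict. A minimal sketch of the same primitive (the function and lock use are illustrative, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def audit_resources():
        # Everything here runs under the named lock; concurrent callers
        # serialize, and lockutils DEBUG-logs acquire/release as above.
        return "audited"

    print(audit_resources())
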
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.061 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.061 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:24:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:24:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597342613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.538 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
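
For the disk side of the audit, nova shells out to `ceph df --format=json` as client.openstack; the df dispatch recorded by the mon just above is that same call arriving. A compact sketch of the subprocess step using oslo.concurrency, with the binary and flags copied from the log:

    import json
    from oslo_concurrency import processutils

    # Sketch: run the same command the audit runs and pull out the
    # cluster-wide capacity numbers from the JSON "stats" block.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], "bytes total,", stats["total_avail_bytes"], "avail")
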
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.779 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.781 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4816MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.782 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.782 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.856 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.857 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:24:26 np0005532761 nova_compute[257263]: 2025-11-23 21:24:26.880 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:27 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=cleanup t=2025-11-23T21:24:27.152844291Z level=info msg="Completed cleanup jobs" duration=8.863869ms
Nov 23 16:24:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=plugins.update.checker t=2025-11-23T21:24:27.270614471Z level=info msg="Update check succeeded" duration=68.50183ms
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0[105415]: logger=grafana.update.checker t=2025-11-23T21:24:27.270727534Z level=info msg="Update check succeeded" duration=63.413472ms
Nov 23 16:24:27 np0005532761 nova_compute[257263]: 2025-11-23 21:24:27.352 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:24:27 np0005532761 nova_compute[257263]: 2025-11-23 21:24:27.361 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:24:27 np0005532761 nova_compute[257263]: 2025-11-23 21:24:27.379 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
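
The inventory dict in the report line is what the resource tracker hands to Placement; the schedulable capacity per resource class works out to (total - reserved) * allocation_ratio. Checked against the logged values, as plain arithmetic:

    # Sketch: effective Placement capacity from the logged inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:10s} schedulable = {cap:g}")
    # -> VCPU 32, MEMORY_MB 7167, DISK_GB 53.1
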
Nov 23 16:24:27 np0005532761 nova_compute[257263]: 2025-11-23 21:24:27.382 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:24:27 np0005532761 nova_compute[257263]: 2025-11-23 21:24:27.383 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:24:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:27.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:27.528Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:24:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:24:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:24:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:29.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:24:29 np0005532761 nova_compute[257263]: 2025-11-23 21:24:29.384 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:29.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:31.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:31.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:32 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:24:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:24:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:33.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:24:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:33.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:35.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:36 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:37.528Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:24:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:24:38 np0005532761 nova_compute[257263]: 2025-11-23 21:24:38.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:24:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:38.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:38 np0005532761 podman[289108]: 2025-11-23 21:24:38.549487753 +0000 UTC m=+0.061018809 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:24:38 np0005532761 podman[289107]: 2025-11-23 21:24:38.597610632 +0000 UTC m=+0.105461288 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:24:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:38.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:41 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:42 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:46 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:47.529Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:24:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:24:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:24:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:24:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:48.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:24:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:24:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:51 np0005532761 podman[289192]: 2025-11-23 21:24:51.5839945 +0000 UTC m=+0.095890330 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 23 16:24:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:24:51.885 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:24:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:24:51.885 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:24:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:24:51.886 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:24:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:51 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:52 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
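The four ganesha.nfsd lines above form one grace-period cycle (enter grace, reload client info, try to lift, enforcement check), and the same cycle repeats roughly every five seconds through the rest of this section with a reclaim/client count of 0. A minimal sketch that counts these re-entries per epoch from a journal stream, on the assumption that a grace period that keeps restarting is worth flagging (the threshold of 3 is arbitrary, for illustration):

    import sys
    from collections import Counter

    # Counts "NFS Server Now IN GRACE" events per ganesha epoch; the epoch
    # token appears as "epoch 692378c6" in the lines above.
    cycles = Counter()
    for line in sys.stdin:
        if "nfs_start_grace" in line and "Now IN GRACE" in line:
            epoch = line.split("epoch", 1)[1].split(":", 1)[0].strip()
            cycles[epoch] += 1
            if cycles[epoch] >= 3:
                print(f"grace re-entered {cycles[epoch]}x in epoch {epoch}",
                      file=sys.stderr)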
Nov 23 16:24:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:53.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:24:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.682027) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096682078, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1470, "num_deletes": 251, "total_data_size": 2737237, "memory_usage": 2782400, "flush_reason": "Manual Compaction"}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096711141, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2679315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35441, "largest_seqno": 36910, "table_properties": {"data_size": 2672517, "index_size": 3933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14367, "raw_average_key_size": 20, "raw_value_size": 2658824, "raw_average_value_size": 3713, "num_data_blocks": 171, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932950, "oldest_key_time": 1763932950, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 29174 microseconds, and 10411 cpu microseconds.
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711201) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2679315 bytes OK
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711226) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712690) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712709) EVENT_LOG_v1 {"time_micros": 1763933096712703, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2730944, prev total WAL file size 2730944, number of live WAL files 2.
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.714165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2616KB)], [77(11MB)]
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096714217, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15136441, "oldest_snapshot_seqno": -1}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6706 keys, 12986911 bytes, temperature: kUnknown
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096815250, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12986911, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12944939, "index_size": 24132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 176301, "raw_average_key_size": 26, "raw_value_size": 12826888, "raw_average_value_size": 1912, "num_data_blocks": 946, "num_entries": 6706, "num_filter_entries": 6706, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.815550) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12986911 bytes
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.816969) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.7 rd, 128.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 7222, records dropped: 516 output_compression: NoCompression
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.816997) EVENT_LOG_v1 {"time_micros": 1763933096816984, "job": 44, "event": "compaction_finished", "compaction_time_micros": 101115, "compaction_time_cpu_micros": 47323, "output_level": 6, "num_output_files": 1, "total_output_size": 12986911, "num_input_records": 7222, "num_output_records": 6706, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096818087, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096822571, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.714024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.822686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.822694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.822697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.822700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:24:56.822703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
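The rocksdb burst above is one manual flush-plus-compaction on the mon store: JOB 43 flushes a ~2.7 MB memtable to L0 table #79, and JOB 44 compacts #79 together with L6 table #77 into table #80. The EVENT_LOG_v1 payloads are plain JSON after the marker, and the amplification figures in JOB 44's summary can be recomputed from the byte counts; a sketch using values copied from the events above:

    import json

    def event_payload(line):
        """Return the JSON dict embedded after the EVENT_LOG_v1 marker."""
        return json.loads(line.split("EVENT_LOG_v1", 1)[1])

    # Trimmed copy of JOB 44's compaction_started event from the log above.
    started = event_payload(
        'rocksdb: EVENT_LOG_v1 {"job": 44, "event": "compaction_started", '
        '"input_data_size": 15136441}'
    )
    l0_in = 2679315                  # table #79, the L0 input (bytes)
    total_out = 12986911             # table #80, the compaction output (bytes)

    write_amp = total_out / l0_in                        # output per new L0 byte
    rw_amp = (started["input_data_size"] + total_out) / l0_in
    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")
    # -> write-amplify=4.8 read-write-amplify=10.5, matching JOB 44's summary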
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:24:56 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:24:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:57.530Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:24:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:24:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:24:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:24:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:58.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:24:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:24:58.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
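Alertmanager's ceph-dashboard webhook is failing against both receivers: compute-1 with a TCP dial i/o timeout and compute-2 with context deadline exceeded, i.e. neither :8443 endpoint answers within the notification deadline. A quick reachability probe against the same URLs (copied verbatim from the error messages) reproduces the symptom from the shell side:

    import urllib.request, urllib.error

    # URLs taken from the Alertmanager errors above; a hang-then-timeout here
    # corresponds to the "i/o timeout" / "context deadline exceeded" failures.
    RECEIVERS = (
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    )
    for url in RECEIVERS:
        try:
            resp = urllib.request.urlopen(url, data=b"{}", timeout=5)
            print(url, "->", resp.status)
        except (urllib.error.URLError, OSError) as exc:
            print(url, "-> failed:", exc)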
Nov 23 16:24:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:24:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:59.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:01 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:01 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:02.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:25:03
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'vms', 'backups', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:25:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:25:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
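In the pg_autoscaler lines above, each pool's raw pg target is its share of cluster capacity times its bias times a PG budget. The printed numbers are consistent with a budget of 300, which would correspond to mon_target_pg_per_osd=100 across 3 OSDs; that budget is an assumption, since it is not logged. A worked check against three of the pools:

    # Recomputes "pg target" from the "using" ratio and "bias" printed above,
    # assuming a PG budget of 300 (3 OSDs x 100 PGs per OSD).
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),  # -> 0.00215572...
        ("images",             0.000665858301588852,  1.0),  # -> 0.19975749...
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # -> 0.00061047...
    ]
    for name, ratio, bias in pools:
        print(f"{name}: pg target {ratio * bias * 300}")

Each raw target is then quantized, and since these values sit far below the pools' current PG counts and change thresholds, every pool is left where it is ("quantized to 32 (current 32)" and so on).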
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:25:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:25:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:04.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:06 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:25:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:25:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:07.531Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:25:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:25:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:08 np0005532761 podman[289254]: 2025-11-23 21:25:08.896615784 +0000 UTC m=+0.097083152 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:25:08 np0005532761 podman[289255]: 2025-11-23 21:25:08.897271121 +0000 UTC m=+0.095245502 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 16:25:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:08.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:25:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:25:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:10 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
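The handle_command/audit burst above is the cephadm mgr module walking the mon through a periodic reconciliation pass: clearing the per-host osd_memory_target override, regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keys, and listing destroyed OSDs in the tree. The read-only queries among them can be reproduced from a shell with an admin keyring; a sketch (requires a reachable cluster):

    import json, subprocess

    # Mirrors two of the audited mon commands above via the ceph CLI.
    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    print(ceph_json("osd", "blocklist", "ls"))    # cmd seen in the audit log
    print(ceph_json("osd", "tree", "destroyed"))  # only destroyed OSDs, as above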
Nov 23 16:25:10 np0005532761 podman[289471]: 2025-11-23 21:25:10.646030382 +0000 UTC m=+0.102952611 container create 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:25:10 np0005532761 podman[289471]: 2025-11-23 21:25:10.566885595 +0000 UTC m=+0.023807894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:10 np0005532761 systemd[1]: Started libpod-conmon-4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0.scope.
Nov 23 16:25:10 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:10 np0005532761 podman[289471]: 2025-11-23 21:25:10.831535841 +0000 UTC m=+0.288458050 container init 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 16:25:10 np0005532761 podman[289471]: 2025-11-23 21:25:10.844185572 +0000 UTC m=+0.301107761 container start 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:25:10 np0005532761 interesting_borg[289488]: 167 167
Nov 23 16:25:10 np0005532761 systemd[1]: libpod-4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0.scope: Deactivated successfully.
Nov 23 16:25:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:11 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:11 np0005532761 podman[289471]: 2025-11-23 21:25:11.050506953 +0000 UTC m=+0.507429192 container attach 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:25:11 np0005532761 podman[289471]: 2025-11-23 21:25:11.051134721 +0000 UTC m=+0.508056930 container died 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 16:25:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:11.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:11 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a7e8461c259e80351c083524e9d5eab0b752b5f50ee0d4bb07634c528fdfbf0e-merged.mount: Deactivated successfully.
Nov 23 16:25:12 np0005532761 nova_compute[257263]: 2025-11-23 21:25:12.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:12.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:12 np0005532761 podman[289471]: 2025-11-23 21:25:12.136710903 +0000 UTC m=+1.593633122 container remove 4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_borg, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:12 np0005532761 systemd[1]: libpod-conmon-4e68f4d2a5d01c7dc9ce72716c010af95907852245b151dcf22cb69b727d2ba0.scope: Deactivated successfully.
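The podman lines above trace one short-lived cephadm helper container (interesting_borg) through its whole lifecycle: create, init, start, attach, one line of output ("167 167", likely the ceph uid/gid pair), died, remove, all within about 1.5 seconds; modest_almeida below follows the same pattern. A sketch that pairs create/remove events by container ID to measure such lifetimes from a journal stream:

    import re, sys
    from datetime import datetime

    # Pairs podman "container create"/"container remove" events by ID; the
    # timestamp is trimmed to microseconds so strptime's %f accepts it.
    PODMAN_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r".*?container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )
    created = {}
    for line in sys.stdin:
        m = PODMAN_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
        if m["event"] == "create":
            created[m["cid"]] = ts
        elif m["event"] == "remove" and m["cid"] in created:
            life = (ts - created.pop(m["cid"])).total_seconds()
            print(f"{m['cid'][:12]} lived {life:.1f}s")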
Nov 23 16:25:12 np0005532761 podman[289513]: 2025-11-23 21:25:12.38137532 +0000 UTC m=+0.040404913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:12 np0005532761 podman[289513]: 2025-11-23 21:25:12.48507396 +0000 UTC m=+0.144103523 container create ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:25:12 np0005532761 systemd[1]: Started libpod-conmon-ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db.scope.
Nov 23 16:25:12 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:12 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:12 np0005532761 podman[289513]: 2025-11-23 21:25:12.987600859 +0000 UTC m=+0.646630522 container init ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:25:13 np0005532761 podman[289513]: 2025-11-23 21:25:13.000300702 +0000 UTC m=+0.659330325 container start ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 16:25:13 np0005532761 podman[289513]: 2025-11-23 21:25:13.015711908 +0000 UTC m=+0.674741571 container attach ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:25:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:13.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:13 np0005532761 modest_almeida[289531]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:25:13 np0005532761 modest_almeida[289531]: --> All data devices are unavailable
Nov 23 16:25:13 np0005532761 systemd[1]: libpod-ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db.scope: Deactivated successfully.
Nov 23 16:25:13 np0005532761 podman[289513]: 2025-11-23 21:25:13.352252315 +0000 UTC m=+1.011281878 container died ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 16:25:13 np0005532761 systemd[1]: var-lib-containers-storage-overlay-991bbf6ef12e1b9c25d73ab22af700d46265c2b11f1bf1967131e3e05a1c139f-merged.mount: Deactivated successfully.
Nov 23 16:25:13 np0005532761 podman[289513]: 2025-11-23 21:25:13.474490966 +0000 UTC m=+1.133520529 container remove ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:25:13 np0005532761 systemd[1]: libpod-conmon-ab325858fcf7baf09cc9855000414d39db51f1819310a39f5aa666c06779a7db.scope: Deactivated successfully.
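
The modest_almeida container above is one of cephadm's short-lived ceph-volume probes: it scans the host for claimable data devices, finds only an LVM volume that is already consumed ("passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable"), exits, and is torn down by podman. A minimal sketch of the same availability filter, assuming output shaped like "ceph-volume inventory --format json" (the field names "path" and "available" follow that format and are assumptions here):

    # Hedged sketch: list the devices ceph-volume considers available here.
    # Assumes `ceph-volume inventory --format json` emits a list of dicts
    # carrying "path" and "available" keys, matching the probe logged above.
    import json
    import subprocess

    def available_devices() -> list[str]:
        out = subprocess.run(
            ["ceph-volume", "inventory", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [d["path"] for d in json.loads(out) if d.get("available")]

    if __name__ == "__main__":
        devs = available_devices()
        # An empty result corresponds to "All data devices are unavailable".
        print(devs or "All data devices are unavailable")
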
Nov 23 16:25:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.028459134 +0000 UTC m=+0.039538679 container create 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:14 np0005532761 systemd[1]: Started libpod-conmon-258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b.scope.
Nov 23 16:25:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.105293339 +0000 UTC m=+0.116372904 container init 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.0138727 +0000 UTC m=+0.024952265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.112887643 +0000 UTC m=+0.123967188 container start 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:25:14 np0005532761 clever_chaum[289667]: 167 167
Nov 23 16:25:14 np0005532761 systemd[1]: libpod-258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b.scope: Deactivated successfully.
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.118119985 +0000 UTC m=+0.129199540 container attach 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.119750829 +0000 UTC m=+0.130830384 container died 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:25:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:25:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:25:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ee674f168f3f266f62b77f14df7999a635135d2b5c303b8d016069ab52101d58-merged.mount: Deactivated successfully.
Nov 23 16:25:14 np0005532761 podman[289650]: 2025-11-23 21:25:14.157472137 +0000 UTC m=+0.168551682 container remove 258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:25:14 np0005532761 systemd[1]: libpod-conmon-258ef61a4086bb2d0cb345594c4037c75929b7c1d98387961458d0879c6faa6b.scope: Deactivated successfully.
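
The single line clever_chaum printed, "167 167", matches cephadm's ownership probe, which appears to run stat inside the ceph image to learn the uid and gid of the ceph user owning /var/lib/ceph (167:167 in these images). A hedged re-run of that probe, assuming podman is on PATH and using the image digest copied from the log:

    # Hypothetical reproduction of the "167 167" uid/gid probe seen above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()  # expect "167 167": the ceph user and group
    print(uid, gid)
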
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.347581591 +0000 UTC m=+0.053547767 container create 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Nov 23 16:25:14 np0005532761 systemd[1]: Started libpod-conmon-33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a.scope.
Nov 23 16:25:14 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.327454448 +0000 UTC m=+0.033420614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e692e1ed356940fc71c24f861aa08ef52ac906521c724efc525cf91994df15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e692e1ed356940fc71c24f861aa08ef52ac906521c724efc525cf91994df15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e692e1ed356940fc71c24f861aa08ef52ac906521c724efc525cf91994df15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:14 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e692e1ed356940fc71c24f861aa08ef52ac906521c724efc525cf91994df15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.432651438 +0000 UTC m=+0.138617584 container init 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.441549888 +0000 UTC m=+0.147516064 container start 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.445017592 +0000 UTC m=+0.150983758 container attach 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 16:25:14 np0005532761 happy_bohr[289706]: {
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:    "1": [
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:        {
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "devices": [
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "/dev/loop3"
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            ],
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "lv_name": "ceph_lv0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "lv_size": "21470642176",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "name": "ceph_lv0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "tags": {
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.cluster_name": "ceph",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.crush_device_class": "",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.encrypted": "0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.osd_id": "1",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.type": "block",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.vdo": "0",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:                "ceph.with_tpm": "0"
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            },
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "type": "block",
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:            "vg_name": "ceph_vg0"
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:        }
Nov 23 16:25:14 np0005532761 happy_bohr[289706]:    ]
Nov 23 16:25:14 np0005532761 happy_bohr[289706]: }
Nov 23 16:25:14 np0005532761 systemd[1]: libpod-33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a.scope: Deactivated successfully.
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.748720663 +0000 UTC m=+0.454686809 container died 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Nov 23 16:25:14 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a2e692e1ed356940fc71c24f861aa08ef52ac906521c724efc525cf91994df15-merged.mount: Deactivated successfully.
Nov 23 16:25:14 np0005532761 podman[289690]: 2025-11-23 21:25:14.790307705 +0000 UTC m=+0.496273861 container remove 33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:14 np0005532761 systemd[1]: libpod-conmon-33ed4c070771ee8cf0f32817218ff4a8a514b937c06033b46a4cc4edbf1b670a.scope: Deactivated successfully.
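
The JSON block emitted by happy_bohr has the shape of "ceph-volume lvm list --format json": a map from OSD id ("1") to the logical volumes backing it, with the Ceph metadata duplicated as LV tags. A minimal parse of that structure into an OSD-to-devices map (field names taken from the log itself):

    # Parse ceph-volume lvm list JSON (as logged above) into osd_id -> devices.
    import json

    def osd_device_map(raw: str) -> dict[str, list[str]]:
        return {
            osd_id: sorted({dev for lv in lvs for dev in lv["devices"]})
            for osd_id, lvs in json.loads(raw).items()
        }

    # Trimmed sample matching the happy_bohr output above.
    raw = '{"1": [{"devices": ["/dev/loop3"], "lv_name": "ceph_lv0"}]}'
    print(osd_device_map(raw))  # {'1': ['/dev/loop3']}
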
Nov 23 16:25:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
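
The radosgw triplets recurring every second from 192.168.122.100 and 192.168.122.102 are load-balancer health probes: an anonymous "HEAD / HTTP/1.0" answered 200 with an empty body and near-zero latency. A rough reproduction of the probe (the gateway's listen address and port are not shown in this section, so both are assumptions; http.client also speaks HTTP/1.1 rather than the probe's HTTP/1.0):

    # Hedged health-probe reproduction against an assumed radosgw endpoint.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200, matching the beast access lines
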
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.439759302 +0000 UTC m=+0.041318296 container create 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.42045148 +0000 UTC m=+0.022010504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:15 np0005532761 systemd[1]: Started libpod-conmon-08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564.scope.
Nov 23 16:25:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.58037794 +0000 UTC m=+0.181936954 container init 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.588704874 +0000 UTC m=+0.190263868 container start 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.592000213 +0000 UTC m=+0.193559217 container attach 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:25:15 np0005532761 infallible_bardeen[289837]: 167 167
Nov 23 16:25:15 np0005532761 systemd[1]: libpod-08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564.scope: Deactivated successfully.
Nov 23 16:25:15 np0005532761 conmon[289837]: conmon 08f912443ff7f5ad4be8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564.scope/container/memory.events
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.597175173 +0000 UTC m=+0.198734167 container died 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:25:15 np0005532761 systemd[1]: var-lib-containers-storage-overlay-a08a057cde750f48299e23183ad6572a605038e5259f54769be1cf51a8615846-merged.mount: Deactivated successfully.
Nov 23 16:25:15 np0005532761 podman[289821]: 2025-11-23 21:25:15.646442283 +0000 UTC m=+0.248001277 container remove 08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:15 np0005532761 systemd[1]: libpod-conmon-08f912443ff7f5ad4be8162fe3bb54e8827e55c58422d9d373c3d5a7591bd564.scope: Deactivated successfully.
Nov 23 16:25:15 np0005532761 podman[289860]: 2025-11-23 21:25:15.857293696 +0000 UTC m=+0.055173430 container create 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:15 np0005532761 systemd[1]: Started libpod-conmon-9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea.scope.
Nov 23 16:25:15 np0005532761 podman[289860]: 2025-11-23 21:25:15.839252949 +0000 UTC m=+0.037132713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:25:15 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:25:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab62be84abb58edd5d25cb07184b3faac3be9076fbcf9844dcbd9375cf8f0723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab62be84abb58edd5d25cb07184b3faac3be9076fbcf9844dcbd9375cf8f0723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab62be84abb58edd5d25cb07184b3faac3be9076fbcf9844dcbd9375cf8f0723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:15 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab62be84abb58edd5d25cb07184b3faac3be9076fbcf9844dcbd9375cf8f0723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:25:15 np0005532761 podman[289860]: 2025-11-23 21:25:15.964279596 +0000 UTC m=+0.162159340 container init 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:25:15 np0005532761 podman[289860]: 2025-11-23 21:25:15.977648886 +0000 UTC m=+0.175528640 container start 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:25:15 np0005532761 podman[289860]: 2025-11-23 21:25:15.981394557 +0000 UTC m=+0.179274341 container attach 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:25:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:16 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
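
The ganesha.nfsd lines record the rados_cluster recovery backend cycling through a 90-second grace period with no clients to reclaim (clid count(0)); the grace state lives in a shared RADOS object. A hedged peek at that state via the ganesha-rados-grace tool that ships with the rados_cluster backend (the pool and namespace below are assumptions, not shown in this log):

    # Hypothetical dump of the shared NFS grace database driving the lines above.
    import subprocess

    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        check=True,
    )
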
Nov 23 16:25:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:16 np0005532761 lvm[289950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:25:16 np0005532761 lvm[289950]: VG ceph_vg0 finished
Nov 23 16:25:16 np0005532761 adoring_mccarthy[289876]: {}
Nov 23 16:25:16 np0005532761 systemd[1]: libpod-9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea.scope: Deactivated successfully.
Nov 23 16:25:16 np0005532761 systemd[1]: libpod-9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea.scope: Consumed 1.049s CPU time.
Nov 23 16:25:16 np0005532761 podman[289860]: 2025-11-23 21:25:16.630428863 +0000 UTC m=+0.828308607 container died 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:25:16 np0005532761 systemd[1]: var-lib-containers-storage-overlay-ab62be84abb58edd5d25cb07184b3faac3be9076fbcf9844dcbd9375cf8f0723-merged.mount: Deactivated successfully.
Nov 23 16:25:16 np0005532761 podman[289860]: 2025-11-23 21:25:16.677195826 +0000 UTC m=+0.875075560 container remove 9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:25:16 np0005532761 systemd[1]: libpod-conmon-9daea506f810feb94b67705102d283b68c420a9bce75167aae9623ae39d51cea.scope: Deactivated successfully.
Nov 23 16:25:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:25:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:25:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:17 np0005532761 nova_compute[257263]: 2025-11-23 21:25:17.029 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:17 np0005532761 nova_compute[257263]: 2025-11-23 21:25:17.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:17.532Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:17.532Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
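
Alertmanager is failing to deliver ceph-dashboard webhook notifications to compute-1 and compute-2 on port 8443 (dial timeouts plus one context deadline). A small connectivity check against the same receiver URL, copied from the log (the empty alert payload is an assumption made only to elicit a response):

    # Hedged connectivity check for the webhook receiver Alertmanager times out on.
    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:  # URLError subclasses OSError; timeouts land here
        print("unreachable:", exc)
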
Nov 23 16:25:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:25:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:25:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:17 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:25:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:25:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:18.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:18.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:25:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:18.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:25:19 np0005532761 nova_compute[257263]: 2025-11-23 21:25:19.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:19.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:25:20 np0005532761 nova_compute[257263]: 2025-11-23 21:25:20.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:20.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:21 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:21 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:21.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:22 np0005532761 podman[289995]: 2025-11-23 21:25:22.597291911 +0000 UTC m=+0.108939553 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true)
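
The multipathd health_status line is podman's periodic healthcheck: it runs the configured test (/openstack/healthcheck, per the config_data above) inside the container and records health_status=healthy with a zero failing streak. The same check can be triggered on demand; a sketch assuming podman is on PATH:

    # On-demand run of the healthcheck podman just recorded for "multipathd".
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    # Exit status 0 corresponds to health_status=healthy in the journal line.
    print("healthy" if result.returncode == 0 else "unhealthy")
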
Nov 23 16:25:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:23.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:24 np0005532761 nova_compute[257263]: 2025-11-23 21:25:24.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:24 np0005532761 nova_compute[257263]: 2025-11-23 21:25:24.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:25:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:25:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:25:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:26 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:26 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:26 np0005532761 nova_compute[257263]: 2025-11-23 21:25:26.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:26 np0005532761 nova_compute[257263]: 2025-11-23 21:25:26.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:25:26 np0005532761 nova_compute[257263]: 2025-11-23 21:25:26.036 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:25:26 np0005532761 nova_compute[257263]: 2025-11-23 21:25:26.053 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:25:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.061 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.062 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.063 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:25:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:25:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030739848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.489 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:25:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:27.534Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:27.534Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.662 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.663 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4824MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.664 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.664 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.726 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.726 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:25:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:27 np0005532761 nova_compute[257263]: 2025-11-23 21:25:27.745 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:25:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:25:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946564833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:25:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:28 np0005532761 nova_compute[257263]: 2025-11-23 21:25:28.167 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:25:28 np0005532761 nova_compute[257263]: 2025-11-23 21:25:28.175 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:25:28 np0005532761 nova_compute[257263]: 2025-11-23 21:25:28.197 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:25:28 np0005532761 nova_compute[257263]: 2025-11-23 21:25:28.200 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:25:28 np0005532761 nova_compute[257263]: 2025-11-23 21:25:28.200 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:25:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:28.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:29.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:30 np0005532761 nova_compute[257263]: 2025-11-23 21:25:30.201 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:31 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:31 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:31.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:25:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:25:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:36 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:36.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:37.535Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:37.535Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:38.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:38.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:25:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:38.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:25:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:39 np0005532761 podman[290103]: 2025-11-23 21:25:39.572134406 +0000 UTC m=+0.074550754 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 16:25:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:39 np0005532761 podman[290102]: 2025-11-23 21:25:39.593138873 +0000 UTC m=+0.103164367 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:25:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:40.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:42.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:47.536Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:25:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:25:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:25:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:48.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:49.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:50.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:25:51.887 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:25:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:25:51.887 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:25:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:25:51.887 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:25:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:53 np0005532761 podman[290186]: 2025-11-23 21:25:53.575835395 +0000 UTC m=+0.082157059 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 23 16:25:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:25:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:25:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:54 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:25:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:25:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:55.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:25:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:57.537Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:25:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:57.537Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:25:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:25:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:25:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:25:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:25:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:58.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:25:58.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:25:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:25:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:25:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:01.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:26:03
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'backups', 'vms', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
Nov 23 16:26:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:26:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:03.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:26:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:26:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:05.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:07.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:07.538Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:26:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:26:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:08.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:10.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:10 np0005532761 podman[290226]: 2025-11-23 21:26:10.567637289 +0000 UTC m=+0.073651509 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:26:10 np0005532761 podman[290225]: 2025-11-23 21:26:10.607947027 +0000 UTC m=+0.123270109 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 23 16:26:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:12 np0005532761 nova_compute[257263]: 2025-11-23 21:26:12.035 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:26:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:17.540Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:26:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 23 16:26:18 np0005532761 nova_compute[257263]: 2025-11-23 21:26:18.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:26:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:26:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:26:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:18.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:19 np0005532761 nova_compute[257263]: 2025-11-23 21:26:19.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:26:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 23 16:26:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 23 16:26:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:20 np0005532761 nova_compute[257263]: 2025-11-23 21:26:20.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:26:20 np0005532761 nova_compute[257263]: 2025-11-23 21:26:20.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 23 16:26:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:26:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:26:21 np0005532761 podman[290484]: 2025-11-23 21:26:21.947940161 +0000 UTC m=+0.071006289 container create 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 23 16:26:21 np0005532761 systemd[1]: Started libpod-conmon-853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8.scope.
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:21.917467898 +0000 UTC m=+0.040534016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:22.05050991 +0000 UTC m=+0.173576058 container init 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:22.06342377 +0000 UTC m=+0.186489878 container start 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:22.068598329 +0000 UTC m=+0.191664437 container attach 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 16:26:22 np0005532761 jolly_wing[290500]: 167 167
Nov 23 16:26:22 np0005532761 systemd[1]: libpod-853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8.scope: Deactivated successfully.
Nov 23 16:26:22 np0005532761 conmon[290500]: conmon 853487bc9dd1f5f1360f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8.scope/container/memory.events
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:22.074335423 +0000 UTC m=+0.197401561 container died 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 23 16:26:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-70a756c8818b8f847b4a2e08f3ef8f9de2edd032b54bb623b8cafc980ca032e5-merged.mount: Deactivated successfully.
Nov 23 16:26:22 np0005532761 podman[290484]: 2025-11-23 21:26:22.137308844 +0000 UTC m=+0.260374942 container remove 853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:26:22 np0005532761 systemd[1]: libpod-conmon-853487bc9dd1f5f1360f7f3a27e4693f718a3a8e06deb49aaf1f4edb13a19ca8.scope: Deactivated successfully.
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.339074582 +0000 UTC m=+0.060425763 container create ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:26:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:22 np0005532761 systemd[1]: Started libpod-conmon-ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299.scope.
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.322397342 +0000 UTC m=+0.043748553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:22 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:22 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.472022152 +0000 UTC m=+0.193373373 container init ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.482929026 +0000 UTC m=+0.204280227 container start ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.486728459 +0000 UTC m=+0.208079700 container attach ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:26:22 np0005532761 nice_davinci[290542]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:26:22 np0005532761 nice_davinci[290542]: --> All data devices are unavailable
Nov 23 16:26:22 np0005532761 systemd[1]: libpod-ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299.scope: Deactivated successfully.
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.858166019 +0000 UTC m=+0.579517290 container died ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:26:22 np0005532761 systemd[1]: var-lib-containers-storage-overlay-9fce9c1f47e8c9d022319c4efd51fbd6e691afc9f01ce4388a596cf337aada8f-merged.mount: Deactivated successfully.
Nov 23 16:26:22 np0005532761 podman[290526]: 2025-11-23 21:26:22.922081315 +0000 UTC m=+0.643432556 container remove ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 16:26:22 np0005532761 systemd[1]: libpod-conmon-ae685d2b8f735fe078554d1299625093d81bf0a0c8b5248c1e04ce3c9e2b6299.scope: Deactivated successfully.
Nov 23 16:26:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:26:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:23.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.569476776 +0000 UTC m=+0.051447251 container create 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:26:23 np0005532761 systemd[1]: Started libpod-conmon-66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2.scope.
Nov 23 16:26:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.636527596 +0000 UTC m=+0.118498101 container init 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.54778485 +0000 UTC m=+0.029755375 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.643117214 +0000 UTC m=+0.125087689 container start 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.646411823 +0000 UTC m=+0.128382298 container attach 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 16:26:23 np0005532761 nice_sammet[290683]: 167 167
Nov 23 16:26:23 np0005532761 systemd[1]: libpod-66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2.scope: Deactivated successfully.
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.648203731 +0000 UTC m=+0.130174206 container died 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:26:23 np0005532761 systemd[1]: var-lib-containers-storage-overlay-26c82ac11066ac884845e7a454e52c04ae534c6aab809bb511aff5c18c23213d-merged.mount: Deactivated successfully.
Nov 23 16:26:23 np0005532761 podman[290665]: 2025-11-23 21:26:23.687737029 +0000 UTC m=+0.169707504 container remove 66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 23 16:26:23 np0005532761 podman[290682]: 2025-11-23 21:26:23.702638841 +0000 UTC m=+0.078071158 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 16:26:23 np0005532761 systemd[1]: libpod-conmon-66cc68e1a36a3648207a54a768c659e6186842d0ea346cd1f773b30e6c3218e2.scope: Deactivated successfully.
Nov 23 16:26:23 np0005532761 podman[290727]: 2025-11-23 21:26:23.862282702 +0000 UTC m=+0.063993058 container create 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:26:23 np0005532761 systemd[1]: Started libpod-conmon-7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b.scope.
Nov 23 16:26:23 np0005532761 podman[290727]: 2025-11-23 21:26:23.830950966 +0000 UTC m=+0.032661342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:23 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4606491ac42282195e0d324ef0a7ce0fb0f08485db2cdb0bc42777e25d2b5ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4606491ac42282195e0d324ef0a7ce0fb0f08485db2cdb0bc42777e25d2b5ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4606491ac42282195e0d324ef0a7ce0fb0f08485db2cdb0bc42777e25d2b5ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:23 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4606491ac42282195e0d324ef0a7ce0fb0f08485db2cdb0bc42777e25d2b5ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:23 np0005532761 podman[290727]: 2025-11-23 21:26:23.951996974 +0000 UTC m=+0.153707350 container init 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:26:23 np0005532761 podman[290727]: 2025-11-23 21:26:23.959028934 +0000 UTC m=+0.160739290 container start 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 23 16:26:23 np0005532761 podman[290727]: 2025-11-23 21:26:23.962231261 +0000 UTC m=+0.163941617 container attach 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]: {
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:    "1": [
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:        {
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "devices": [
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "/dev/loop3"
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            ],
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "lv_name": "ceph_lv0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "lv_size": "21470642176",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "name": "ceph_lv0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "tags": {
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.cluster_name": "ceph",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.crush_device_class": "",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.encrypted": "0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.osd_id": "1",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.type": "block",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.vdo": "0",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:                "ceph.with_tpm": "0"
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            },
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "type": "block",
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:            "vg_name": "ceph_vg0"
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:        }
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]:    ]
Nov 23 16:26:24 np0005532761 heuristic_aryabhata[290744]: }
Nov 23 16:26:24 np0005532761 systemd[1]: libpod-7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b.scope: Deactivated successfully.
Nov 23 16:26:24 np0005532761 podman[290727]: 2025-11-23 21:26:24.246892487 +0000 UTC m=+0.448602843 container died 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 16:26:24 np0005532761 systemd[1]: var-lib-containers-storage-overlay-4606491ac42282195e0d324ef0a7ce0fb0f08485db2cdb0bc42777e25d2b5ae9-merged.mount: Deactivated successfully.
Nov 23 16:26:24 np0005532761 podman[290727]: 2025-11-23 21:26:24.295672525 +0000 UTC m=+0.497382901 container remove 7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:26:24 np0005532761 systemd[1]: libpod-conmon-7d7466195834eb0c7972e025baef9318fea66815ce75cb2b3248c6fcfca4e42b.scope: Deactivated successfully.
Nov 23 16:26:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:24 np0005532761 podman[290857]: 2025-11-23 21:26:24.898379779 +0000 UTC m=+0.051075550 container create 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:26:24 np0005532761 systemd[1]: Started libpod-conmon-3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70.scope.
Nov 23 16:26:24 np0005532761 podman[290857]: 2025-11-23 21:26:24.87731324 +0000 UTC m=+0.030008991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:24 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:24 np0005532761 podman[290857]: 2025-11-23 21:26:24.99840565 +0000 UTC m=+0.151101381 container init 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 16:26:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:25 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:25 np0005532761 podman[290857]: 2025-11-23 21:26:25.006977731 +0000 UTC m=+0.159673462 container start 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 16:26:25 np0005532761 podman[290857]: 2025-11-23 21:26:25.010333242 +0000 UTC m=+0.163028973 container attach 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Nov 23 16:26:25 np0005532761 flamboyant_fermat[290873]: 167 167
Nov 23 16:26:25 np0005532761 systemd[1]: libpod-3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70.scope: Deactivated successfully.
Nov 23 16:26:25 np0005532761 podman[290857]: 2025-11-23 21:26:25.014066323 +0000 UTC m=+0.166762084 container died 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 23 16:26:25 np0005532761 systemd[1]: var-lib-containers-storage-overlay-d3c02ce88b35dfd87bcd60fec8136845215e4509ca5bb714888ee23e68062b8b-merged.mount: Deactivated successfully.
Nov 23 16:26:25 np0005532761 podman[290857]: 2025-11-23 21:26:25.065296356 +0000 UTC m=+0.217992097 container remove 3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_fermat, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:26:25 np0005532761 systemd[1]: libpod-conmon-3f3e45f745a4011d5d8d1d61ea47acc1d48cc3634a94c49c437a3f5ba1c09b70.scope: Deactivated successfully.
Nov 23 16:26:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:25 np0005532761 podman[290898]: 2025-11-23 21:26:25.329460849 +0000 UTC m=+0.074954935 container create 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 23 16:26:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:25 np0005532761 systemd[1]: Started libpod-conmon-4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745.scope.
Nov 23 16:26:25 np0005532761 podman[290898]: 2025-11-23 21:26:25.298854742 +0000 UTC m=+0.044348848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:26:25 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:26:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9e87b96463dce32caee63a8fac0f4877be01a3d98bf37152e7921b02c9bb8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9e87b96463dce32caee63a8fac0f4877be01a3d98bf37152e7921b02c9bb8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9e87b96463dce32caee63a8fac0f4877be01a3d98bf37152e7921b02c9bb8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:25 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9e87b96463dce32caee63a8fac0f4877be01a3d98bf37152e7921b02c9bb8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:26:25 np0005532761 podman[290898]: 2025-11-23 21:26:25.454102964 +0000 UTC m=+0.199597120 container init 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:26:25 np0005532761 podman[290898]: 2025-11-23 21:26:25.466872249 +0000 UTC m=+0.212366335 container start 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:26:25 np0005532761 podman[290898]: 2025-11-23 21:26:25.471154675 +0000 UTC m=+0.216648821 container attach 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:26:26 np0005532761 nova_compute[257263]: 2025-11-23 21:26:26.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:26 np0005532761 nova_compute[257263]: 2025-11-23 21:26:26.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:26 np0005532761 lvm[290988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:26:26 np0005532761 lvm[290988]: VG ceph_vg0 finished
Nov 23 16:26:26 np0005532761 pedantic_engelbart[290914]: {}
Nov 23 16:26:26 np0005532761 systemd[1]: libpod-4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745.scope: Deactivated successfully.
Nov 23 16:26:26 np0005532761 podman[290898]: 2025-11-23 21:26:26.304886058 +0000 UTC m=+1.050380214 container died 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 16:26:26 np0005532761 systemd[1]: libpod-4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745.scope: Consumed 1.421s CPU time.
Nov 23 16:26:26 np0005532761 systemd[1]: var-lib-containers-storage-overlay-cf9e87b96463dce32caee63a8fac0f4877be01a3d98bf37152e7921b02c9bb8f-merged.mount: Deactivated successfully.
Nov 23 16:26:26 np0005532761 podman[290898]: 2025-11-23 21:26:26.351452365 +0000 UTC m=+1.096946421 container remove 4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:26:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:26 np0005532761 systemd[1]: libpod-conmon-4fcb0ab34dec6751d780fcd359ef818991a896e9a8f2edf836977983cd79c745.scope: Deactivated successfully.
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:26 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.061 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.061 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.062 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.062 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.063 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:26:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:26:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:27.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:27 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:26:27 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081108293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:26:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:27.540Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.554 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.701 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.703 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4806MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.703 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.703 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:26:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:26:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.752 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.752 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:26:27 np0005532761 nova_compute[257263]: 2025-11-23 21:26:27.765 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:26:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:26:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828028422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:26:28 np0005532761 nova_compute[257263]: 2025-11-23 21:26:28.215 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:26:28 np0005532761 nova_compute[257263]: 2025-11-23 21:26:28.220 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:26:28 np0005532761 nova_compute[257263]: 2025-11-23 21:26:28.236 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:26:28 np0005532761 nova_compute[257263]: 2025-11-23 21:26:28.237 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:26:28 np0005532761 nova_compute[257263]: 2025-11-23 21:26:28.237 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:26:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:28.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:28.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:29 np0005532761 nova_compute[257263]: 2025-11-23 21:26:29.238 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:29 np0005532761 nova_compute[257263]: 2025-11-23 21:26:29.238 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:26:29 np0005532761 nova_compute[257263]: 2025-11-23 21:26:29.239 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:26:29 np0005532761 nova_compute[257263]: 2025-11-23 21:26:29.253 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:26:29 np0005532761 nova_compute[257263]: 2025-11-23 21:26:29.253 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:26:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:29 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:30 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:30 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:31 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 16:26:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:32.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:26:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:26:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:26:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:34.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:34 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:35 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:35 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:35 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:37 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:37.540Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:26:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:37.541Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:26:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:26:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:26:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:38.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:39 np0005532761 nova_compute[257263]: 2025-11-23 21:26:39.045 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:39 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:39.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:39 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:40 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:40 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
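The mon's _set_new_cache_sizes line recurs every five seconds and reflects its memory autotuning: a cache budget of roughly 973 MiB (1020054731 bytes) split across incremental osdmaps, full osdmaps, and the RocksDB KV cache. The budget derives from the mon_memory_target option, which can be read back as in this sketch (requires an admin keyring).

import subprocess

# Read the autotuning target behind the cache_size figure above.
out = subprocess.run(
    ["ceph", "config", "get", "mon", "mon_memory_target"],
    capture_output=True, text=True, check=True,
).stdout
print(out.strip())  # value in bytes; the upstream default is 2 GiB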
Nov 23 16:26:41 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:26:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:41.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:26:41 np0005532761 podman[291116]: 2025-11-23 21:26:41.589774243 +0000 UTC m=+0.088075309 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:26:41 np0005532761 podman[291115]: 2025-11-23 21:26:41.676230137 +0000 UTC m=+0.175273634 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
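The two podman entries are healthcheck events: both OVN containers report health_status=healthy with health_failing_streak=0, and each event carries the container's full config_data. The same state can be read back from the containers directly, as sketched below (needs local podman access; container names taken from the log).

import json, subprocess

for name in ("ovn_metadata_agent", "ovn_controller"):
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(name, health.get("Status"), "failing_streak:", health.get("FailingStreak"))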
Nov 23 16:26:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:42.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:43 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:43.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:44.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:44 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:45 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:45 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:45 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:45.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:47 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:47.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:47.541Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:26:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 23 16:26:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:26:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
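The mgr (entity mgr.compute-0.oyehye) polls the OSD blocklist periodically, and the audit channel records each dispatch; the same query recurs at 16:27:03 below. It can be issued from any client with a keyring, for example:

import json, subprocess

# Same command the mgr dispatches above.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for entry in json.loads(out):
    print(entry)  # one record per blocklisted client address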
Nov 23 16:26:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:48.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:48.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:49 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:49.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:49 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:50 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:50 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:50.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:51 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:26:51.888 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:26:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:26:51.888 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:26:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:26:51.888 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
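The acquire/acquired/released trio above, with waited 0.000s and held 0.000s, is the debug logging oslo.concurrency emits around a named lock; here it guards neutron's ProcessMonitor._check_child_processes. A minimal sketch of the same pattern:

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    pass  # body runs with the named in-process lock held

check_child_processes()  # logs the acquire/acquired/released trio at DEBUG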
Nov 23 16:26:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:52.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:53 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:54 np0005532761 podman[291199]: 2025-11-23 21:26:54.591448564 +0000 UTC m=+0.095805097 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:26:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:26:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:26:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:26:55 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:55 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:26:55 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:26:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:57 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:57.542Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:26:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:26:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:26:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:58.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:26:58.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:26:59 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:26:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:26:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:26:59 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:00 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:00 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:01 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:27:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:27:03
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
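"prepared 0/10 upmap changes" means the optimize plan found nothing to move: in upmap mode with max misplaced 0.05, the 337 PGs across the listed pools are already evenly placed. The balancer's mode and activity can be checked with the CLI, for example:

import subprocess

# Prints the balancer's status: active flag, mode, last optimize result.
print(subprocess.run(
    ["ceph", "balancer", "status"],
    capture_output=True, text=True, check=True,
).stdout)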
Nov 23 16:27:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:27:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:03.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
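The pg target figures above follow directly from the reported ratios: target = usage_ratio × bias × (OSD count × target PGs per OSD). With 3 OSDs and the default mon_target_pg_per_osd of 100 (both assumptions, but consistent with every line above and with the 60 GiB cluster total), the multiplier is 300. The sketch below reproduces three of the reported values, which the autoscaler then quantizes (0.0022 rounds to 1 for .mgr; pools already at their floor of 16 or 32 stay put).

# Reproduce the pg_autoscaler arithmetic (assumes 3 OSDs and the default
# mon_target_pg_per_osd = 100, i.e. a multiplier of 300).
def pg_target(usage_ratio, bias, n_osds=3, target_pg_per_osd=100):
    return usage_ratio * bias * n_osds * target_pg_per_osd

print(pg_target(7.185749983720779e-06, 1.0))  # .mgr               -> 0.002155...
print(pg_target(0.000665858301588852, 1.0))   # images             -> 0.199757...
print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.000610...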
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:27:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
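The rbd_support module reloads its mirror-snapshot and trash-purge schedules per pool; each pool appears twice because both handlers walk the same pool list. The configured schedules can be listed with the rbd CLI, for example:

import subprocess

# List both schedule types for one of the pools named above
# (needs an rbd-capable keyring; empty output means no schedules).
for args in (["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", "vms"],
             ["rbd", "trash", "purge", "schedule", "ls", "--pool", "vms"]):
    result = subprocess.run(args, capture_output=True, text=True)
    print(" ".join(args), "->", result.stdout.strip() or result.stderr.strip())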
Nov 23 16:27:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:04.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:04 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:05 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:05 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:05 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:05.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:07 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:07.543Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:27:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:27:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:08.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:27:09 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:09 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:10 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:10 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:10.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.108796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231108909, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1434, "num_deletes": 255, "total_data_size": 2685947, "memory_usage": 2727744, "flush_reason": "Manual Compaction"}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231133190, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2630519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36911, "largest_seqno": 38344, "table_properties": {"data_size": 2623758, "index_size": 3896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14204, "raw_average_key_size": 19, "raw_value_size": 2610121, "raw_average_value_size": 3660, "num_data_blocks": 168, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763933097, "oldest_key_time": 1763933097, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 24398 microseconds, and 11024 cpu microseconds.
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.133249) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2630519 bytes OK
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.133273) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.135784) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.135840) EVENT_LOG_v1 {"time_micros": 1763933231135799, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.135863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2679710, prev total WAL file size 2679710, number of live WAL files 2.
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.137235) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2568KB)], [80(12MB)]
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231137279, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15617430, "oldest_snapshot_seqno": -1}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6891 keys, 15452518 bytes, temperature: kUnknown
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231250017, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15452518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15406736, "index_size": 27430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181094, "raw_average_key_size": 26, "raw_value_size": 15282858, "raw_average_value_size": 2217, "num_data_blocks": 1083, "num_entries": 6891, "num_filter_entries": 6891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930336, "oldest_key_time": 0, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "856a136e-5a38-4ae3-9b7b-c6eb86cfb78d", "db_session_id": "Q7AUUU8H5P8CM37LFPNC", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.250234) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15452518 bytes
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.251765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.4 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(11.8) write-amplify(5.9) OK, records in: 7419, records dropped: 528 output_compression: NoCompression
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.251787) EVENT_LOG_v1 {"time_micros": 1763933231251777, "job": 46, "event": "compaction_finished", "compaction_time_micros": 112806, "compaction_time_cpu_micros": 62482, "output_level": 6, "num_output_files": 1, "total_output_size": 15452518, "num_input_records": 7419, "num_output_records": 6891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231252353, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231254912, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.137143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.254983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.254988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.254990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.254992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532761 ceph-mon[74569]: rocksdb: (Original Log Time 2025/11/23-21:27:11.254994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
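JOB 45/46 is a routine manual compaction of the mon's store.db: a 2.5 MB L0 flush is merged with the existing 12.4 MB L6 file into a single 14.7 MB L6 file, dropping 528 of 7419 records. The amplification figures RocksDB prints can be checked from those sizes:

# Verify the JOB 46 amplification figures from the reported MB values.
in_l0, in_l6, out_l6 = 2.5, 12.4, 14.7
print(round((in_l0 + in_l6 + out_l6) / in_l0, 1))  # read-write-amplify -> 11.8
print(round(out_l6 / in_l0, 1))                    # write-amplify      -> 5.9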
Nov 23 16:27:11 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:12 np0005532761 podman[291263]: 2025-11-23 21:27:12.579559075 +0000 UTC m=+0.092117649 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 23 16:27:12 np0005532761 podman[291262]: 2025-11-23 21:27:12.58787451 +0000 UTC m=+0.104768001 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:27:13 np0005532761 nova_compute[257263]: 2025-11-23 21:27:13.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:13 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:14 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:15 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:15 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:15 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:17 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:17.545Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:27:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:27:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:19 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:19 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:20 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:20 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:20 np0005532761 nova_compute[257263]: 2025-11-23 21:27:20.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:20 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:20 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:20 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:21 np0005532761 nova_compute[257263]: 2025-11-23 21:27:21.030 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:21 np0005532761 nova_compute[257263]: 2025-11-23 21:27:21.033 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:21 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:21 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:21 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:21 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:21 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:21.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:22 np0005532761 nova_compute[257263]: 2025-11-23 21:27:22.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:22 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:22 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:22 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:22.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:23 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:23 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:23 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:23 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:23.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:24 np0005532761 nova_compute[257263]: 2025-11-23 21:27:24.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:24 np0005532761 nova_compute[257263]: 2025-11-23 21:27:24.034 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 16:27:24 np0005532761 nova_compute[257263]: 2025-11-23 21:27:24.051 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 16:27:24 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:24 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:24 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:24.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:25 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:24 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:25 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:25 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:25 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:25 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:25 np0005532761 podman[291321]: 2025-11-23 21:27:25.579605714 +0000 UTC m=+0.093462595 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:27:26 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:26 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:26 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:26 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:26.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:27 np0005532761 nova_compute[257263]: 2025-11-23 21:27:27.051 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:27 np0005532761 nova_compute[257263]: 2025-11-23 21:27:27.051 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:27:27 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:27 np0005532761 podman[291468]: 2025-11-23 21:27:27.447675656 +0000 UTC m=+0.062130349 container exec 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 23 16:27:27 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:27 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:27 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:27.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:27.545Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:27 np0005532761 podman[291468]: 2025-11-23 21:27:27.559762023 +0000 UTC m=+0.174216756 container exec_died 9716c164d9b8adba19f2a9154dc9e4d5386093cc3c5097b1d6ba9919ace0a99f (image=quay.io/ceph/ceph:v19, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:27:27 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:27 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.049 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.050 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.050 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.050 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.050 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:27:28 np0005532761 podman[291587]: 2025-11-23 21:27:28.160291128 +0000 UTC m=+0.063311140 container exec c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:28 np0005532761 podman[291587]: 2025-11-23 21:27:28.167604025 +0000 UTC m=+0.070624037 container exec_died c6f359f64a41dda14a349cc7bdab4fa821fe626521b76be3a3de5721941a1e05 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:28 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:27:28 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428959963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:27:28 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:28 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:27:28 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.477 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:27:28 np0005532761 podman[291699]: 2025-11-23 21:27:28.510976047 +0000 UTC m=+0.075457029 container exec 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:27:28 np0005532761 podman[291699]: 2025-11-23 21:27:28.527297878 +0000 UTC m=+0.091778840 container exec_died 4216a91ad8e7e6805773724c8a47a67b810f34965f419993bdc8b337987ffc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.672 257267 WARNING nova.virt.libvirt.driver [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.673 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4768MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.673 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.674 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:27:28 np0005532761 podman[291766]: 2025-11-23 21:27:28.766423085 +0000 UTC m=+0.046801115 container exec cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:27:28 np0005532761 podman[291766]: 2025-11-23 21:27:28.772795497 +0000 UTC m=+0.053173497 container exec_died cff6c0484e6c503d62b0ac36ed041f7f058522a2a5b0ee9fabd0a27a4fb373a9 (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-0-uvukit)
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.859 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.859 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:27:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:28.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:27:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:28.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:27:28 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:28.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:27:28 np0005532761 nova_compute[257263]: 2025-11-23 21:27:28.937 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing inventories for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 16:27:28 np0005532761 podman[291830]: 2025-11-23 21:27:28.956501717 +0000 UTC m=+0.046759573 container exec 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 23 16:27:28 np0005532761 podman[291830]: 2025-11-23 21:27:28.968714607 +0000 UTC m=+0.058972433 container exec_died 4984a0943d1c7847614cd205c914660eae61ee8e63161cbae87dd040622f6df7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-0-spcytb, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, name=keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2)
Nov 23 16:27:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:29 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:28 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.026 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating ProviderTree inventory for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.027 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Updating inventory in ProviderTree for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.040 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing aggregate associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.063 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Refreshing trait associations for resource provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.075 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:27:29 np0005532761 podman[291897]: 2025-11-23 21:27:29.172847139 +0000 UTC m=+0.047495493 container exec 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:29 np0005532761 podman[291897]: 2025-11-23 21:27:29.201160383 +0000 UTC m=+0.075808717 container exec_died 8e94825c628375b79ce98e5ca0e1377bb1e60665f91d79c7b0d1606e97499754 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:29 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:29 np0005532761 podman[291988]: 2025-11-23 21:27:29.408495552 +0000 UTC m=+0.051581754 container exec 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:27:29 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:29 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:29 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:29 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:27:29 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645231075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.506 257267 DEBUG oslo_concurrency.processutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.512 257267 DEBUG nova.compute.provider_tree [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c6a407d-d270-4df1-a24d-91d09c3ff1cd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.527 257267 DEBUG nova.scheduler.client.report [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Inventory has not changed for provider 5c6a407d-d270-4df1-a24d-91d09c3ff1cd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.528 257267 DEBUG nova.compute.resource_tracker [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.528 257267 DEBUG oslo_concurrency.lockutils [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:27:29 np0005532761 nova_compute[257263]: 2025-11-23 21:27:29.529 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:29 np0005532761 podman[291988]: 2025-11-23 21:27:29.568889633 +0000 UTC m=+0.211975795 container exec_died 078433944db27ed2a91371ba374ddc9cb9b4f186d03fe84c71902918214410eb (image=quay.io/ceph/grafana:10.4.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 23 16:27:30 np0005532761 podman[292101]: 2025-11-23 21:27:30.030710073 +0000 UTC m=+0.059248731 container exec 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:30 np0005532761 podman[292101]: 2025-11-23 21:27:30.06316289 +0000 UTC m=+0.091701538 container exec_died 9411b179805d6108f4bb87f4c7cc4eddf05775170e08dfb6e5f0ba36914cb0a6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:30 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:27:30 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:30.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 nova_compute[257263]: 2025-11-23 21:27:30.542 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:30 np0005532761 nova_compute[257263]: 2025-11-23 21:27:30.543 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:27:30 np0005532761 nova_compute[257263]: 2025-11-23 21:27:30.543 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:27:30 np0005532761 nova_compute[257263]: 2025-11-23 21:27:30.558 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:27:30 np0005532761 nova_compute[257263]: 2025-11-23 21:27:30.558 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:27:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Nov 23 16:27:30 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:27:30 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:27:31 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.390791458 +0000 UTC m=+0.058676935 container create 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:27:31 np0005532761 systemd[1]: Started libpod-conmon-4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b.scope.
Nov 23 16:27:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:31 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:31 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:31 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.457471109 +0000 UTC m=+0.125356596 container init 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.371669731 +0000 UTC m=+0.039555308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.467392257 +0000 UTC m=+0.135277744 container start 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.469954186 +0000 UTC m=+0.137839673 container attach 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 16:27:31 np0005532761 eloquent_edison[292358]: 167 167
Nov 23 16:27:31 np0005532761 systemd[1]: libpod-4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b.scope: Deactivated successfully.
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.473552193 +0000 UTC m=+0.141437680 container died 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:27:31 np0005532761 systemd[1]: var-lib-containers-storage-overlay-2fe5ff4c6711f43c79d3392ca82bfea58d5720d13bd90ea327eef8465921a73f-merged.mount: Deactivated successfully.
Nov 23 16:27:31 np0005532761 podman[292341]: 2025-11-23 21:27:31.515456044 +0000 UTC m=+0.183341531 container remove 4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 23 16:27:31 np0005532761 systemd[1]: libpod-conmon-4722ba3acb0a3ae8fdac976f3a9845940a03b7c8bc2360206138b5e2535b4f2b.scope: Deactivated successfully.
Nov 23 16:27:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:27:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:31 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:27:31 np0005532761 podman[292382]: 2025-11-23 21:27:31.690597943 +0000 UTC m=+0.044048980 container create 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:27:31 np0005532761 systemd[1]: Started libpod-conmon-5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d.scope.
Nov 23 16:27:31 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:31 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:31 np0005532761 podman[292382]: 2025-11-23 21:27:31.673351578 +0000 UTC m=+0.026802645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:31 np0005532761 podman[292382]: 2025-11-23 21:27:31.785600449 +0000 UTC m=+0.139051586 container init 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:27:31 np0005532761 podman[292382]: 2025-11-23 21:27:31.797796858 +0000 UTC m=+0.151247905 container start 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 16:27:31 np0005532761 podman[292382]: 2025-11-23 21:27:31.801410025 +0000 UTC m=+0.154861102 container attach 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 16:27:32 np0005532761 practical_pascal[292399]: --> passed data devices: 0 physical, 1 LVM
Nov 23 16:27:32 np0005532761 practical_pascal[292399]: --> All data devices are unavailable
Nov 23 16:27:32 np0005532761 systemd[1]: libpod-5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d.scope: Deactivated successfully.
Nov 23 16:27:32 np0005532761 podman[292382]: 2025-11-23 21:27:32.152192508 +0000 UTC m=+0.505643565 container died 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:27:32 np0005532761 systemd[1]: var-lib-containers-storage-overlay-dffa6c8dec681159f7dfcad263eb39fd7d217e445f14090770bc906f33853040-merged.mount: Deactivated successfully.
Nov 23 16:27:32 np0005532761 podman[292382]: 2025-11-23 21:27:32.221856659 +0000 UTC m=+0.575307696 container remove 5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:27:32 np0005532761 systemd[1]: libpod-conmon-5a8ef26391a9468852874174149bb5f7b5d9847a0d42ffb6e0620d02778d743d.scope: Deactivated successfully.
Nov 23 16:27:32 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:32 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:32 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:32 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.876778573 +0000 UTC m=+0.049545979 container create 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:27:32 np0005532761 systemd[1]: Started libpod-conmon-0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d.scope.
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.855634702 +0000 UTC m=+0.028402118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:32 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.975208601 +0000 UTC m=+0.147976007 container init 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.982660262 +0000 UTC m=+0.155427658 container start 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.985648513 +0000 UTC m=+0.158415939 container attach 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 16:27:32 np0005532761 systemd[1]: libpod-0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d.scope: Deactivated successfully.
Nov 23 16:27:32 np0005532761 dreamy_hugle[292535]: 167 167
Nov 23 16:27:32 np0005532761 conmon[292535]: conmon 0d7e95309dae2bdb6ba9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d.scope/container/memory.events
Nov 23 16:27:32 np0005532761 podman[292519]: 2025-11-23 21:27:32.989958889 +0000 UTC m=+0.162726295 container died 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:27:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:32 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:33 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:33 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-aabe6472b2f139f12b9289fbe35926f62dda1087c7c781089b70f644472dfdac-merged.mount: Deactivated successfully.
Nov 23 16:27:33 np0005532761 podman[292519]: 2025-11-23 21:27:33.027827982 +0000 UTC m=+0.200595388 container remove 0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:27:33 np0005532761 systemd[1]: libpod-conmon-0d7e95309dae2bdb6ba91f1ad17d1bf905dc6c0b0900726c1a201626631aa20d.scope: Deactivated successfully.
Nov 23 16:27:33 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:27:33 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.242679124 +0000 UTC m=+0.062250483 container create e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 16:27:33 np0005532761 systemd[1]: Started libpod-conmon-e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8.scope.
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:27:33 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.205350935 +0000 UTC m=+0.024922324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:33 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20df54cacf241865c5b938813947f70ac1a23db1a710625fe353d3c67e7f83e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20df54cacf241865c5b938813947f70ac1a23db1a710625fe353d3c67e7f83e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20df54cacf241865c5b938813947f70ac1a23db1a710625fe353d3c67e7f83e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:33 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20df54cacf241865c5b938813947f70ac1a23db1a710625fe353d3c67e7f83e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.333209397 +0000 UTC m=+0.152780776 container init e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.345794177 +0000 UTC m=+0.165365526 container start e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.349064426 +0000 UTC m=+0.168635785 container attach e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:27:33 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:33 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:33 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]: {
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:    "1": [
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:        {
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "devices": [
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "/dev/loop3"
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            ],
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "lv_name": "ceph_lv0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "lv_size": "21470642176",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=71c99843-04fc-447b-a9fd-4e17520a545c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "lv_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "name": "ceph_lv0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "tags": {
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.block_uuid": "9ESqzA-F5am-q5gu-TeoU-xPeX-H4FI-uV9xbC",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.cephx_lockbox_secret": "",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.cluster_name": "ceph",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.crush_device_class": "",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.encrypted": "0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.osd_fsid": "71c99843-04fc-447b-a9fd-4e17520a545c",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.osd_id": "1",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.type": "block",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.vdo": "0",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:                "ceph.with_tpm": "0"
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            },
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "type": "block",
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:            "vg_name": "ceph_vg0"
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:        }
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]:    ]
Nov 23 16:27:33 np0005532761 inspiring_elgamal[292577]: }
Nov 23 16:27:33 np0005532761 systemd[1]: libpod-e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8.scope: Deactivated successfully.
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.659658982 +0000 UTC m=+0.479230411 container died e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 16:27:33 np0005532761 systemd[1]: var-lib-containers-storage-overlay-20df54cacf241865c5b938813947f70ac1a23db1a710625fe353d3c67e7f83e2-merged.mount: Deactivated successfully.
Nov 23 16:27:33 np0005532761 podman[292559]: 2025-11-23 21:27:33.715635464 +0000 UTC m=+0.535206823 container remove e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elgamal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 23 16:27:33 np0005532761 systemd[1]: libpod-conmon-e1459f42abbc80305b1ab4445aaf3a95c48b4fbfa958d85b57533732732a15e8.scope: Deactivated successfully.
Nov 23 16:27:34 np0005532761 nova_compute[257263]: 2025-11-23 21:27:34.034 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:34 np0005532761 nova_compute[257263]: 2025-11-23 21:27:34.035 257267 DEBUG nova.compute.manager [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.365691967 +0000 UTC m=+0.055080359 container create 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:27:34 np0005532761 systemd[1]: Started libpod-conmon-597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf.scope.
Nov 23 16:27:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.337588908 +0000 UTC m=+0.026977390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.438827772 +0000 UTC m=+0.128216194 container init 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.449322725 +0000 UTC m=+0.138711157 container start 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.453869608 +0000 UTC m=+0.143258050 container attach 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 16:27:34 np0005532761 gifted_wiles[292705]: 167 167
Nov 23 16:27:34 np0005532761 systemd[1]: libpod-597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf.scope: Deactivated successfully.
Nov 23 16:27:34 np0005532761 conmon[292705]: conmon 597e4d7d287bf977d425 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf.scope/container/memory.events
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.457473155 +0000 UTC m=+0.146861587 container died 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 16:27:34 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:34 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:34 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:34 np0005532761 systemd[1]: var-lib-containers-storage-overlay-55e4e4925ce24fc445704628b6eb1c3876f7a30b4afef69e4feefa7bbeec32fe-merged.mount: Deactivated successfully.
Nov 23 16:27:34 np0005532761 podman[292689]: 2025-11-23 21:27:34.515382209 +0000 UTC m=+0.204770641 container remove 597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:27:34 np0005532761 systemd[1]: libpod-conmon-597e4d7d287bf977d425984c5a0a23a191c727f646db87158efcf6ab6e4b5dbf.scope: Deactivated successfully.
Nov 23 16:27:34 np0005532761 podman[292730]: 2025-11-23 21:27:34.747755874 +0000 UTC m=+0.054679228 container create f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 16:27:34 np0005532761 systemd[1]: Started libpod-conmon-f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4.scope.
Nov 23 16:27:34 np0005532761 systemd[1]: Started libcrun container.
Nov 23 16:27:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332da7ef5ca58927b7a63309d957eeb2f34cc7e053a9c93d0ac844e571d06967/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332da7ef5ca58927b7a63309d957eeb2f34cc7e053a9c93d0ac844e571d06967/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332da7ef5ca58927b7a63309d957eeb2f34cc7e053a9c93d0ac844e571d06967/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:34 np0005532761 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332da7ef5ca58927b7a63309d957eeb2f34cc7e053a9c93d0ac844e571d06967/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:27:34 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:27:34 np0005532761 podman[292730]: 2025-11-23 21:27:34.817356202 +0000 UTC m=+0.124279576 container init f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:27:34 np0005532761 podman[292730]: 2025-11-23 21:27:34.727001403 +0000 UTC m=+0.033924797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:27:34 np0005532761 podman[292730]: 2025-11-23 21:27:34.829580173 +0000 UTC m=+0.136503527 container start f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:27:34 np0005532761 podman[292730]: 2025-11-23 21:27:34.834837165 +0000 UTC m=+0.141760509 container attach f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 16:27:35 np0005532761 lvm[292821]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:27:35 np0005532761 lvm[292821]: VG ceph_vg0 finished
Nov 23 16:27:35 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:35 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:35 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:35 np0005532761 fervent_chatelet[292746]: {}
Nov 23 16:27:35 np0005532761 systemd[1]: libpod-f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4.scope: Deactivated successfully.
Nov 23 16:27:35 np0005532761 podman[292730]: 2025-11-23 21:27:35.510789787 +0000 UTC m=+0.817713131 container died f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:27:35 np0005532761 systemd[1]: libpod-f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4.scope: Consumed 1.113s CPU time.
Nov 23 16:27:35 np0005532761 systemd[1]: var-lib-containers-storage-overlay-332da7ef5ca58927b7a63309d957eeb2f34cc7e053a9c93d0ac844e571d06967-merged.mount: Deactivated successfully.
Nov 23 16:27:35 np0005532761 podman[292730]: 2025-11-23 21:27:35.559987096 +0000 UTC m=+0.866910480 container remove f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chatelet, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:27:35 np0005532761 systemd[1]: libpod-conmon-f30b1035cf90539cde084645364c0598f6b6b00cf44ca890261907cbbc5d5ae4.scope: Deactivated successfully.
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: log_channel(audit) log [INF] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:35 np0005532761 ceph-mon[74569]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:36 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:36 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:36 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:27:36 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:27:36 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:27:37 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:37 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:37 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:37.546Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:37 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:27:37 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:27:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:37 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:38 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:38 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:38 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:38 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:38 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 16:27:38 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:38.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:39 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:39 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:39 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:40 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:40 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:40 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:40.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:40 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 16:27:41 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
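
[Note] The _set_new_cache_sizes line repeats every 5 s: the monitor autotunes its incremental/full osdmap and RocksDB caches toward roughly 1 GiB total (cache_size:1020054731). The budget is derived from mon_memory_target; reading the effective value, as a sketch (assumes an admin keyring on this host):

import subprocess

# mon_memory_target drives the cache_size printed by _set_new_cache_sizes.
out = subprocess.run(
    ["ceph", "config", "get", "mon", "mon_memory_target"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("mon_memory_target:", out, "bytes")
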
Nov 23 16:27:41 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:41 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:41 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:41.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:42 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:42 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:42 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:42 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:42 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:43 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:43 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:43 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:43 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:43 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:43 np0005532761 podman[292871]: 2025-11-23 21:27:43.568615884 +0000 UTC m=+0.081415470 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 16:27:43 np0005532761 podman[292870]: 2025-11-23 21:27:43.602688444 +0000 UTC m=+0.120257308 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller)
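
[Note] Both health_status=healthy events above come from the healthcheck stanza embedded in config_data: 'test': '/openstack/healthcheck', mounted into the container from /var/lib/openstack/healthchecks/<name>. The same check can be run by hand; a sketch using the container names from the log:

import subprocess

for name in ("ovn_metadata_agent", "ovn_controller"):
    # `podman healthcheck run` executes the configured test command inside
    # the container and exits 0 when it reports healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
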
Nov 23 16:27:44 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:44 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:44 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:44 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:45 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:45 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:45 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:45.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:46 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:46 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:46 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:46 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:46.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:46 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:47 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:47 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:47 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:47.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:47.548Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:47 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:27:47 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:47 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:48 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:48 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:27:48 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
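
[Note] mgr.compute-0.oyehye polls the monitor for the OSD blocklist roughly every 15 s in this log (here and again at 16:28:03). The same query from the CLI, as a sketch:

import json, subprocess

# Identical to the mon_command the mgr dispatches above.
raw = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(raw) or "blocklist empty")
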
Nov 23 16:27:48 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:48 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:27:48 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:48.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:27:48 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:48.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:48.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:27:48 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
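
[Note] The failure mode changed here: earlier notifications died with "context deadline exceeded" (request started, deadline hit), while these fail at dial time with "i/o timeout", i.e. the TCP connection to 192.168.122.101/102:8443 never completes. A plain reachability check, as a sketch:

import socket

for host in ("192.168.122.101", "192.168.122.102"):
    try:
        # Raises on refusal or timeout; success means the port accepts TCP.
        socket.create_connection((host, 8443), timeout=3).close()
        print(host, "tcp/8443 open")
    except OSError as exc:
        print(host, "tcp/8443 unreachable:", exc)
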
Nov 23 16:27:49 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:49 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:49 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:49.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:50 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:50 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:50 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:50 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:51 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:51 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:51 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:51 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:27:51.889 164405 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:27:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:27:51.889 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:27:51 np0005532761 ovn_metadata_agent[164399]: 2025-11-23 21:27:51.890 164405 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
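
[Note] The three DEBUG lines above are one acquire/held/release cycle of the "_check_child_processes" lock inside neutron's ProcessMonitor (waited 0.001s, held 0.000s). They are emitted by oslo_concurrency's synchronized decorator; a minimal sketch of that same API (the function body is illustrative):

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named in-process lock held; oslo logs the
    # acquire/acquired/released DEBUG lines seen above at verbose levels.
    pass

check_child_processes()
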
Nov 23 16:27:52 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:52 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:52 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:52 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:52 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:53 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:53 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:53 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:53 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:53 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:54 np0005532761 systemd-logind[820]: New session 59 of user zuul.
Nov 23 16:27:54 np0005532761 systemd[1]: Started Session 59 of User zuul.
Nov 23 16:27:54 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:54 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:54 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:54 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:27:55 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:55 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:55 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:56 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:56 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:56 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:56 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:56 np0005532761 podman[293092]: 2025-11-23 21:27:56.603761929 +0000 UTC m=+0.114012529 container health_status c9efced2652cacc68d5a6032ed0cdf8867e05f9e4352c3ba2ccbd31c16134e65 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 16:27:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:56 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:56 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18036 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27001 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28049 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:57 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:57 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:57 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18051 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:57.549Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:57 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:57] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Nov 23 16:27:57 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:27:57] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Nov 23 16:27:57 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27013 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:27:57 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:27:57 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996954738' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:27:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:27:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:27:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:57 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:27:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:27:58 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:27:58 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:58 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:58 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:58 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:27:58 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:27:58.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:27:59 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:27:59 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:59 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:00 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:00 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:00 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:00.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:00 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:28:01 np0005532761 ovs-vsctl[293298]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
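
[Note] The db_ctl_base ERR above is benign on a node without DPDK: something, likely a periodic health or facts collector, reads other_config:dpdk-init from the Open_vSwitch table and the key was simply never set. The lookup can be made non-fatal with --if-exists; a sketch:

import subprocess

# --if-exists makes a missing map key yield empty output instead of the
# "no key \"dpdk-init\"" error logged above.
out = subprocess.run(
    ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
     "other_config:dpdk-init"],
    capture_output=True, text=True,
).stdout.strip()
print("dpdk-init:", out or "<unset>")
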
Nov 23 16:28:01 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:01 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:01 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:01 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:01 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 16:28:01 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 16:28:01 np0005532761 virtqemud[256805]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
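
[Note] virtqemud is probing the read-only sockets of its peer modular libvirt daemons (virtnetworkd, virtnwfilterd, virtstoraged), none of which are running on this compute node, so each connect fails with ENOENT. A sketch that checks which peer sockets exist, using the paths copied from the messages above:

import os

SOCKS = [
    "/var/run/libvirt/virtnetworkd-sock-ro",
    "/var/run/libvirt/virtnwfilterd-sock-ro",
    "/var/run/libvirt/virtstoraged-sock-ro",
]
for path in SOCKS:
    print(path, "present" if os.path.exists(path) else "missing")
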
Nov 23 16:28:02 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: cache status {prefix=cache status} (starting...)
Nov 23 16:28:02 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:02 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:02 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:02 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:02.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:02 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: client ls {prefix=client ls} (starting...)
Nov 23 16:28:02 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
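
[Note] mds.cephfs.compute-0.jcbopz rejects these admin-socket commands ("cache status", "client ls", and the dump/ops variants that follow) because it is a standby, not an active rank; whatever is collecting diagnostics here sweeps every MDS regardless of state. Checking daemon states first, as a sketch (the "mdsmap" key layout is an assumption about the JSON output of this Ceph release):

import json, subprocess

# `ceph fs status <fsname> -f json` lists each MDS with its state;
# only active ranks accept the asok commands rejected above.
raw = subprocess.run(
    ["ceph", "fs", "status", "cephfs", "-f", "json"],
    capture_output=True, text=True, check=True,
).stdout
for mds in json.loads(raw)["mdsmap"]:
    print(mds["name"], mds["state"])
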
Nov 23 16:28:02 np0005532761 lvm[293630]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:28:02 np0005532761 lvm[293630]: VG ceph_vg0 finished
Nov 23 16:28:02 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:02 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:28:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:28:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:02 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:28:03 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:03 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Optimize plan auto_2025-11-23_21:28:03
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] do_upmap
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] pools ['.nfs', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [balancer INFO root] prepared 0/10 upmap changes
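
[Note] The balancer pass above ran in upmap mode with max misplaced 0.05 across the twelve listed pools and prepared 0/10 upmap changes, i.e. the 337 PGs are already evenly placed. The same summary on demand, as a sketch:

import subprocess

# Reports mode, active flag, and any pending plans; none are expected
# given "prepared 0/10 upmap changes" above.
subprocess.run(["ceph", "balancer", "status"], check=True)
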
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18090 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364723209' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28106 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:03 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:03 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:03.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18102 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:28:03 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706595844' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28130 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
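
[Note] The per-pool arithmetic in the pg_autoscaler lines above is consistent with raw pg target = usage_ratio x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the 3 OSDs behind this 60 GiB cluster; the raw target is then quantized to a power of two and left alone when the change falls under the autoscaler's threshold (which is why tiny targets still show "quantized to 32 (current 32)"). Reproducing two of the logged values, as a sketch:

# Assumption: the 300 multiplier is mon_target_pg_per_osd (100) x 3 OSDs.
TARGET_PG_PER_OSD = 100
NUM_OSDS = 3

def raw_pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

# Pool 'images': matches "pg target 0.19975749047665559" logged above.
print(raw_pg_target(0.000665858301588852, 1.0))
# Pool 'cephfs.cephfs.meta', bias 4.0: matches "pg target 0.0006104707950771635".
print(raw_pg_target(5.087256625643029e-07, 4.0))
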
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 16:28:03 np0005532761 ceph-mgr[74869]: [rbd_support INFO root] load_schedules: images, start_after=
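
[Note] The rbd_support module is reloading its mirror-snapshot and trash-purge schedules; each RBD pool (vms, volumes, backups, images) is logged twice, once per handler, and "start_after=" with nothing following suggests no schedules are defined. Listing them directly, as a sketch:

import subprocess

# Both listings are expected to be empty given the load_schedules lines above.
subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
               check=False)
subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--recursive"],
               check=False)
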
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 16:28:03 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18120 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184255387' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27031 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28154 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18153 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27046 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750063702' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 16:28:04 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:04 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:04 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:04.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: ops {prefix=ops} (starting...)
Nov 23 16:28:04 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28196 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27058 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18183 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 16:28:04 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967768942' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28211 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18201 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: session ls {prefix=session ls} (starting...)
Nov 23 16:28:05 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz Can't run that command on an inactive MDS!
Nov 23 16:28:05 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27070 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673830274' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mds[96457]: mds.cephfs.compute-0.jcbopz asok_command: status {prefix=status} (starting...)
Nov 23 16:28:05 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:05 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:05 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:05.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/654075312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:28:05 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694766729' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27100 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103171494' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117868061' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28274 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:28:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:28:06.361+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
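
[Note] The mgr rejects the insights command here (and twice more below) with EOPNOTSUPP because the module is disabled; the reply itself names the fix. As a sketch, assuming enabling the module is actually wanted on this cluster:

import subprocess

# The mgr's own hint from the log: enable the module, then rerun the command.
subprocess.run(["ceph", "mgr", "module", "enable", "insights"], check=True)
subprocess.run(["ceph", "insights"], check=True)
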
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27112 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18264 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:28:06 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:28:06.520+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:28:06 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:06 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:06 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405757265' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:28:06 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 16:28:06 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225411219' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078253173' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538479727' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 16:28:07 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:07 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:07 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:07.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397719647' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:07.549Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:28:07 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28340 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:28:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:28:07 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:28:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:28:07 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18336 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27166 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:07 np0005532761 ceph-mgr[74869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 23 16:28:07 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: 2025-11-23T21:28:07.898+0000 7f09354b6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
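The (95) Operation not supported reply above is self-describing: the insights mgr module is not loaded. The remedy is the exact command embedded in the message, with a follow-up check to confirm (assuming admin access; neither step appears in this log):

    $ ceph mgr module enable insights
    $ ceph mgr module ls | grep -i insights   # should now appear among the enabled modules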
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:28:07 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2164303278' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:07 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28361 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:28:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285252654' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28391 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:08 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:08 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:08 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:08.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
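This beast access line, like the one at 16:28:07 above, is an anonymous "HEAD / HTTP/1.0" request answered 200 with zero latency — the classic shape of a load-balancer health probe against radosgw (here arriving from 192.168.122.100 and .102). Reproducing one by hand, with the frontend endpoint left as a placeholder because the log does not record which address and port beast is bound to:

    $ curl -sI http://<rgw-frontend-addr>:<port>/   # -I sends a HEAD request; expect a 200 status line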
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18372 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
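Reading the _resize_shards line above: the priority-cache manager has split the 2845415832-byte cache budget into kv (1207959552), kv_onode (234881024), meta (1140850688), and data (218103808) allocations; those four sum to 2801795072 bytes, about 98.5% of cache_size, with the remainder presumably held back as slack. The tiny *_used figures next to multi-hundred-MiB allocations are consistent with a nearly idle OSD. Checking that arithmetic:

    $ echo $(( 1207959552 + 234881024 + 1140850688 + 218103808 ))
    2801795072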
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a27945e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915778 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.203216553s of 58.206161499s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915910 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917438 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.999002457s of 12.150321007s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916831 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273ddc00 session 0x559a2552e1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916699 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.842432022s of 17.849073410s, submitted: 2
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916831 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 1613824 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919871 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919264 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.914249420s of 14.992810249s, submitted: 13
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed000 session 0x559a256f4d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a28175680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919132 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919132 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 1589248 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.988170624s of 10.991490364s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919264 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,1])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919396 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a273dcc00 session 0x559a272352c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.203738213s of 11.307200432s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918805 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ec000 session 0x559a28126000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 1531904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.309871674s of 44.328369141s, submitted: 5
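[editor's note] The _kv_sync_thread utilization line above reads naturally as "idle X s of Y s"; a minimal sketch that converts it into a busy percentage (the idle/total interpretation is taken from the wording of the line itself):

import re

KV_RE = re.compile(r"idle (?P<idle>[\d.]+)s of (?P<total>[\d.]+)s, submitted: (?P<n>\d+)")

def kv_sync_busy(line: str) -> float | None:
    """Busy fraction of the kv sync thread for the reported window."""
    m = KV_RE.search(line)
    if not m:
        return None
    return 1.0 - float(m["idle"]) / float(m["total"])

sample = "_kv_sync_thread utilization: idle 44.309871674s of 44.328369141s, submitted: 5"
print(f"{kv_sync_busy(sample):.3%}")  # ~0.042% busy: an essentially idle OSD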
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 1482752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135511398s of 11.187865257s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917798 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a2652af00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.318428040s of 29.321868896s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: mgrc ms_handle_reset ms_handle_reset con 0x559a2604c800
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: mgrc handle_mgr_configure stats_period=5
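[editor's note] The four mgrc lines above show one complete mgr session bounce: reset, terminate, new session, reconfigure. A sketch for counting such events across a whole capture; the marker strings are copied from the log, while the file path and counting approach are illustrative only:

from collections import Counter

MARKERS = (
    "mgrc ms_handle_reset",
    "mgrc reconnect Terminating session",
    "mgrc reconnect Starting new session",
    "mgrc handle_mgr_configure",
)

def mgr_session_events(path: str) -> Counter:
    """Tally mgr-client session events in a syslog-style capture."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for marker in MARKERS:
                if marker in line:
                    counts[marker] += 1
    return counts

# e.g. mgr_session_events("/var/log/messages")  # hypothetical path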
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1343488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.810980797s of 13.847840309s, submitted: 10
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917798 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a27deaf00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917950 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.371376038s of 20.374578476s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918082 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919610 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ecc00 session 0x559a2654c1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919610 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.801570892s of 16.847188950s, submitted: 10
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919310 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919594 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 1318912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921122 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.872009277s of 11.917662621s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1277952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920515 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1261568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1261568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1261568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.4 total, 600.0 interval
Cumulative writes: 7984 writes, 31K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7984 writes, 1682 syncs, 4.75 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 809 writes, 1426 keys, 809 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s
Interval WAL: 809 writes, 400 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.079       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a23de5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
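[editor's note] The DB Stats block above is self-consistent and its derived figures can be re-checked mechanically. A sketch re-deriving the writes-per-sync number RocksDB prints (7984 / 1682 ≈ 4.75); the regex-based parsing is illustrative, not a RocksDB API:

import re

def writes_per_sync(stats: str) -> float:
    """Re-derive writes-per-sync from a RocksDB 'Cumulative WAL' stats line."""
    m = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", stats)
    writes, syncs = map(int, m.groups())
    return writes / syncs

sample = ("Cumulative WAL: 7984 writes, 1682 syncs, 4.75 writes per sync, "
          "written: 0.02 GB, 0.02 MB/s")
print(round(writes_per_sync(sample), 2))  # 4.75, matching the dump above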
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
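The tune_memory lines report the allocator's view of the process heap against the 4 GiB osd_memory_target. In every sample here, heap is exactly mapped + unmapped, and the autotuner is holding the aggregate cache budget (old/new mem) steady at about two thirds of the target. A quick consistency check on the numbers above:

    # Values copied from the tune_memory line above (bytes).
    target, mapped, unmapped, heap = 4294967296, 82616320, 1228800, 83845120
    assert mapped + unmapped == heap            # allocator invariant holds
    new_mem = 2845415832                        # autotuned cache budget
    print(f"cache budget = {new_mem / target:.0%} of osd_memory_target")
    # -> cache budget = 66% of osd_memory_target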
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
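The two commit_cache_size lines always appear as a pair with the same two values throughout this capture; as printed, they are the exact fractions 2/7 and 1/18. That they reduce to simple fractions is an observation about the logged numbers, not a claim about how BlueStore derives its high-priority pool ratios:

    # The recurring ratios, reproduced as exact fractions:
    print(f"{2 / 7:.6f}")    # 0.285714
    print(f"{1 / 18:.7f}")   # 0.0555556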
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
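The _resize_shards lines show how that ~2.65 GiB budget is carved up among the BlueStore cache shards: the four *_alloc figures sum to roughly 98.5% of cache_size, while the tiny *_used counters confirm the caches are essentially empty. Reproducing the split from the logged values:

    # Allocation split from the _resize_shards line above (bytes).
    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 218103808}
    for name, b in alloc.items():
        print(f"{name:9s}{b / 2**20:6.0f} MiB  ({b / cache_size:5.1%})")
    print(f"allocated: {sum(alloc.values()) / cache_size:.1%} of cache_size")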
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed400 session 0x559a264223c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread fragmentation_score=0.000028 took=0.000040s
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920383 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.169040680s of 53.178115845s, submitted: 3
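The _kv_sync_thread utilization lines quantify how busy BlueStore's key-value sync thread was over its sampling window: here it was idle for 53.169 s of a 53.178 s window while handling 3 submissions, i.e. busy well under 0.1% of the time, consistent with the empty compaction stats earlier. The arithmetic:

    # Busy fraction from the _kv_sync_thread utilization line above.
    idle, window, submitted = 53.169040680, 53.178115845, 3
    print(f"busy {1 - idle / window:.4%} of {window:.1f}s, "
          f"{submitted} submissions")
    # -> busy 0.0171% of 53.2s, 3 submissions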
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920515 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922043 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 1212416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921436 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.782189369s of 13.827088356s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed800 session 0x559a28391680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921304 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 1171456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.232078552s of 33.253677368s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 1163264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1155072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 1155072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1138688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1138688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 1138688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.656509399s of 10.978686333s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921152 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a28ac9e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920713 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.745338440s of 18.042085648s, submitted: 3
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922373 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 1130496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.843791962s of 13.889540672s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923585 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a28ad2000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 1122304 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.843379974s of 14.849747658s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 1105920 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923809 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1007616 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923737 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 909312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 909312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.654523849s of 10.987039566s, submitted: 220
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923885 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1941504 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ec000 session 0x559a2569c780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922687 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.214892387s of 11.249080658s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922687 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1900544 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.709618568s of 11.720647812s, submitted: 3
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1916928 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1916928 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.265705109s of 12.293758392s, submitted: 7
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922403 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26507400 session 0x559a28ab8960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a28c64000 session 0x559a2572fe00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922555 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.887575150s of 30.890459061s, submitted: 1
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 1875968 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924347 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1826816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.551795959s of 11.675850868s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924347 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 1785856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a26529c00 session 0x559a283950e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924047 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923608 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.047493935s of 14.154020309s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 1794048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923624 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.941125870s of 10.020229340s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923624 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 1777664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923324 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a271d1400 session 0x559a2569da40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923476 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.143291473s of 46.153369904s, submitted: 3
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923608 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 1769472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 ms_handle_reset con 0x559a276ed800 session 0x559a28c674a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1753088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925136 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.952695847s of 17.990190506s, submitted: 10
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xfb05e/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1744896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0xfd14a/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987166 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 18448384 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 145 ms_handle_reset con 0x559a26529c00 session 0x559a28ac9c20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 18415616 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 146 ms_handle_reset con 0x559a271d1400 session 0x559a28ac9e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbe58000/0x0/0x4ffc00000, data 0x9013a2/0x9b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 18333696 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024111 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.934324265s of 11.164364815s, submitted: 47
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb9e6000/0x0/0x4ffc00000, data 0xd734aa/0xe26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 18366464 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027270 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26507400 session 0x559a274de960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027138 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e2000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.150621414s of 23.169740677s, submitted: 16
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026430 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18341888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026446 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 18382848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025687 data_alloc: 218103808 data_used: 110592
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025839 data_alloc: 218103808 data_used: 114688
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 18374656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64c00 session 0x559a27e3bc20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64800 session 0x559a266a7860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26507400 session 0x559a28482780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.736700058s of 17.784509659s, submitted: 11
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a26529c00 session 0x559a28482b40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 18350080 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a271d1400 session 0x559a279501e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 ms_handle_reset con 0x559a28c64c00 session 0x559a27950960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd7547c/0xe29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049081 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 10330112 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb9df000/0x0/0x4ffc00000, data 0xd77568/0xe2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64400 session 0x559a2814ef00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a26507400 session 0x559a2814f0e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a26529c00 session 0x559a2814fe00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a271d1400 session 0x559a269d4000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64000 session 0x559a283734a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65c00 session 0x559a28395c20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053807 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c64c00 session 0x559a264223c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65400 session 0x559a28126b40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9db000/0x0/0x4ffc00000, data 0xd796b8/0xe30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 ms_handle_reset con 0x559a28c65800 session 0x559a28126780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055621 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0xd796c8/0xe31000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0xd796c8/0xe31000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.069388390s of 16.428546906s, submitted: 19
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058211 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d7000/0x0/0x4ffc00000, data 0xd7b69a/0xe34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058211 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:28:08 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973628488' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 10321920 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d7000/0x0/0x4ffc00000, data 0xd7b69a/0xe34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 10190848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063301 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 10182656 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a279c41e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb9d0000/0x0/0x4ffc00000, data 0xd8369a/0xe3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a280ab680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 10166272 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.318939209s of 31.515851974s, submitted: 19
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079280 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a2742b860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a28ab8000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a28ad2960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a283910e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65800 session 0x559a26424f00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 9756672 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 9756672 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a27e3ab40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb815000/0x0/0x4ffc00000, data 0xf3e69a/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a256f25a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 9674752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb815000/0x0/0x4ffc00000, data 0xf3e69a/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a27de9e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085882 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a28ad3e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 9691136 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 9691136 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088638 data_alloc: 218103808 data_used: 7491584
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088638 data_alloc: 218103808 data_used: 7491584
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.456800461s of 17.521381378s, submitted: 23
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 9019392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb813000/0x0/0x4ffc00000, data 0xf3f69a/0xff8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,1,1])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 7118848 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116592 data_alloc: 218103808 data_used: 7553024
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x106f69a/0x1128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116608 data_alloc: 218103808 data_used: 7553024
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 4005888 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x106f69a/0x1128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d1400 session 0x559a28ad2f00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.697155952s of 10.029232979s, submitted: 58
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97746944 unmapped: 4980736 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26507400 session 0x559a27950d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071094 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0xd8469a/0xe3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97771520 unmapped: 4956160 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a26424d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ad2000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2839c000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0xd8469a/0xe3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ed400 session 0x559a279514a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 4947968 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a283905a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067049 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.673070908s of 18.772710800s, submitted: 31
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067197 data_alloc: 218103808 data_used: 6930432
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2839c1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27defa40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 4939776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28c96000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64c00 session 0x559a27944d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a260d65a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a256f45a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28ab83c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a2742bc20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65400 session 0x559a2654cd20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 19472384 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142695 data_alloc: 218103808 data_used: 6930432
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 19456000 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 19456000 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 19374080 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 19374080 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.757060051s of 12.911256790s, submitted: 44
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a256f03c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97812480 unmapped: 19611648 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143476 data_alloc: 218103808 data_used: 6942720
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 97828864 unmapped: 19595264 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105373696 unmapped: 12050432 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203384 data_alloc: 234881024 data_used: 15896576
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203384 data_alloc: 234881024 data_used: 15896576
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 12017664 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.322823524s of 12.350434303s, submitted: 9
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 10035200 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e9a000/0x0/0x4ffc00000, data 0x17196ec/0x17d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1,2])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 9838592 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 8495104 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 8462336 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 8462336 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 8454144 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 8421376 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109010944 unmapped: 8413184 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109010944 unmapped: 8413184 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 8404992 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109027328 unmapped: 8396800 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109027328 unmapped: 8396800 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251288 data_alloc: 234881024 data_used: 16134144
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 8388608 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 8364032 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 8364032 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251896 data_alloc: 234881024 data_used: 16195584
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 8355840 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1bda6ec/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 8347648 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.471483231s of 25.635581970s, submitted: 53
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a2845c3c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28c661e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 15810560 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ad21e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081014 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101629952 unmapped: 15794176 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.068742752s of 21.379354477s, submitted: 53
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27490c00 session 0x559a27ded2c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101558 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa53a000/0x0/0x4ffc00000, data 0x107b67a/0x1132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 14721024 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a28c94960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103363 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a28c67a40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 15753216 heap: 117424128 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.837812424s of 10.922425270s, submitted: 19
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a260d61e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 29384704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187558 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101761024 unmapped: 29384704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 29376512 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99d5000/0x0/0x4ffc00000, data 0x1be067a/0x1c97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a2814e960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 29114368 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192303 data_alloc: 218103808 data_used: 7024640
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 23003136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99b0000/0x0/0x4ffc00000, data 0x1c0469d/0x1cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291407 data_alloc: 234881024 data_used: 21819392
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99b0000/0x0/0x4ffc00000, data 0x1c0469d/0x1cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 18317312 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291863 data_alloc: 234881024 data_used: 21831680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 18300928 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.519432068s of 16.630643845s, submitted: 19
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 14647296 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f93d8000/0x0/0x4ffc00000, data 0x21dc69d/0x2294000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 15482880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f935e000/0x0/0x4ffc00000, data 0x225669d/0x230e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359559 data_alloc: 234881024 data_used: 22806528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 14876672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 14860288 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16400384 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357143 data_alloc: 234881024 data_used: 22876160
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x227769d/0x232f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f933d000/0x0/0x4ffc00000, data 0x227769d/0x232f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 16384000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.680171967s of 14.030892372s, submitted: 63
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357415 data_alloc: 234881024 data_used: 22876160
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 16302080 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 16302080 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cad800 session 0x559a260d63c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9334000/0x0/0x4ffc00000, data 0x228069d/0x2338000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 16039936 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc6000/0x0/0x4ffc00000, data 0x29ee69d/0x2aa6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413257 data_alloc: 234881024 data_used: 22876160
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 16031744 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a2652af00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27682000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a28ac90e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a28ac9e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413009 data_alloc: 234881024 data_used: 22876160
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 16023552 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 15949824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458305 data_alloc: 251658240 data_used: 27918336
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 10723328 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.607965469s of 18.688673019s, submitted: 14
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458305 data_alloc: 251658240 data_used: 27918336
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 10715136 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8bc3000/0x0/0x4ffc00000, data 0x29f169d/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 10706944 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 10362880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1486335 data_alloc: 251658240 data_used: 28082176
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1486335 data_alloc: 251658240 data_used: 28082176
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a256f2000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 9371648 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f881f000/0x0/0x4ffc00000, data 0x2d9469d/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.759673119s of 13.877911568s, submitted: 25
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27982d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                               ** DB Stats **
                                               Uptime(secs): 1800.4 total, 600.0 interval
                                               Cumulative writes: 9526 writes, 35K keys, 9526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                               Cumulative WAL: 9526 writes, 2376 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                               Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                               Interval writes: 1542 writes, 4300 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 3.82 MB, 0.01 MB/s
                                               Interval WAL: 1542 writes, 694 syncs, 2.22 writes per sync, written: 0.00 GB, 0.01 MB/s
                                               Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362263 data_alloc: 234881024 data_used: 21254144
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 12820480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a260d94a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a276823c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9331000/0x0/0x4ffc00000, data 0x228369d/0x233b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27e3b4a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 24535040 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24518656 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098391 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65000 session 0x559a279830e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27a0be00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a27e3a000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 24502272 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a27bbc780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.692840576s of 34.780414581s, submitted: 28
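The _kv_sync_thread utilization lines report idle time out of the reporting window plus the number of commits submitted in it; the busy fraction is one line of arithmetic, e.g. for the line above:

```python
# Busy fraction of the BlueStore kv sync thread, from the line above:
# idle 34.692840576 s of a 34.780414581 s window, 28 commits submitted.
idle_s, window_s, submitted = 34.692840576, 34.780414581, 28

busy = 1.0 - idle_s / window_s
print(f"busy {busy:.2%} of window, {submitted / window_s:.2f} commits/s")
# -> busy ~0.25% of window, ~0.81 commits/s
```

The later utilization lines in this slice work out the same way and all show a nearly idle sync thread, i.e. this OSD is under very light write load.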
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27bbc1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c65c00 session 0x559a264254a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a269d43c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a274def00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a274ded20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d6000/0x0/0x4ffc00000, data 0xedd6ec/0xf96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118996 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27427c20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 24698880 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc000 session 0x559a2569de00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d6000/0x0/0x4ffc00000, data 0xedd6ec/0xf96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24543232 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a274261e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a264230e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 24469504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 24444928 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120810 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120810 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24436736 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0xedd6fc/0xf97000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.602138519s of 17.687946320s, submitted: 33
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24576000 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129046 data_alloc: 218103808 data_used: 7233536
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 23732224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa534000/0x0/0x4ffc00000, data 0x106f6fc/0x1129000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139544 data_alloc: 218103808 data_used: 7049216
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 24068096 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa534000/0x0/0x4ffc00000, data 0x106f6fc/0x1129000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134264 data_alloc: 218103808 data_used: 7053312
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 24756224 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x10906fc/0x114a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x10906fc/0x114a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.840806007s of 14.135817528s, submitted: 62
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134512 data_alloc: 218103808 data_used: 7053312
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x10966fc/0x1150000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 24707072 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa51c000/0x0/0x4ffc00000, data 0x10966fc/0x1150000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135448 data_alloc: 218103808 data_used: 7061504
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 24428544 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.418850899s of 11.506878853s, submitted: 4
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2572fa40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f3a40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157513 data_alloc: 218103808 data_used: 7061504
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10a46fc/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fcc00 session 0x559a274de960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27bbcd20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a260d9c20
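The ms_handle_reset lines show a handful of connection pointers being reset over and over, each time with a fresh session pointer. A hedged sketch for tallying resets per connection across a capture like this one (the input path is hypothetical):

```python
import re
from collections import Counter

# Tally ms_handle_reset events per connection pointer across a journal
# slice like this one; "osd1.log" is a hypothetical path to the capture.
RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

resets = Counter()
with open("osd1.log") as f:
    for entry in f:
        m = RESET_RE.search(entry)
        if m:
            resets[m.group(1)] += 1   # same con pointer, new session each time

for con, n in resets.most_common():
    print(f"{con}: {n} resets")
```

Grouping by the con pointer makes the pattern visible: a few long-lived connection objects (0x559a271d3c00, 0x559a28c64000, ...) account for most of the resets, each reset paired with a new session.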
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27944b40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a2814e5a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157513 data_alloc: 218103808 data_used: 7061504
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fd000 session 0x559a26424f00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27e3c960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 24264704 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167829 data_alloc: 218103808 data_used: 8433664
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa393000/0x0/0x4ffc00000, data 0x121f6fc/0x12d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.438807487s of 11.514143944s, submitted: 24
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167725 data_alloc: 218103808 data_used: 8433664
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 24993792 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa38f000/0x0/0x4ffc00000, data 0x12236fc/0x12dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 20389888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x17226fc/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 20389888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e70000/0x0/0x4ffc00000, data 0x17426fc/0x17fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214197 data_alloc: 234881024 data_used: 9682944
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e70000/0x0/0x4ffc00000, data 0x17426fc/0x17fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.319500923s of 11.676329613s, submitted: 95
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e6d000/0x0/0x4ffc00000, data 0x17456fc/0x17ff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e6d000/0x0/0x4ffc00000, data 0x17456fc/0x17ff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214593 data_alloc: 234881024 data_used: 9682944
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e68000/0x0/0x4ffc00000, data 0x174a6fc/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214529 data_alloc: 234881024 data_used: 9682944
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.610642433s of 11.667161942s, submitted: 5
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a279443c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214553 data_alloc: 234881024 data_used: 9682944
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e65000/0x0/0x4ffc00000, data 0x174d6fc/0x1807000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e65000/0x0/0x4ffc00000, data 0x174d6fc/0x1807000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217213 data_alloc: 234881024 data_used: 9666560
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x17526fc/0x180c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 20004864 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27445c00 session 0x559a266a6780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27444800 session 0x559a2654de00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x17526fc/0x180c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217213 data_alloc: 234881024 data_used: 9666560
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273ddc00 session 0x559a279c4960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.261064529s of 11.288570404s, submitted: 18
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 19996672 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a258be000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e5d000/0x0/0x4ffc00000, data 0x17556fc/0x180f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215373 data_alloc: 234881024 data_used: 9666560
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 19988480 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 19947520 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 19824640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e57000/0x0/0x4ffc00000, data 0x175b6fc/0x1815000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215309 data_alloc: 234881024 data_used: 9666560
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.961704254s of 10.976827621s, submitted: 243
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215333 data_alloc: 234881024 data_used: 9666560
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 19693568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a44000/0x0/0x4ffc00000, data 0x175e6fc/0x1818000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19668992 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217369 data_alloc: 234881024 data_used: 9654272
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a41000/0x0/0x4ffc00000, data 0x17616fc/0x181b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 19628032 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.807581902s of 12.267666817s, submitted: 20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215849 data_alloc: 234881024 data_used: 9654272
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a3c000/0x0/0x4ffc00000, data 0x17666fc/0x1820000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 19570688 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a280e34a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a274270e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145873 data_alloc: 218103808 data_used: 7045120
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x10d36fc/0x118d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0cf000/0x0/0x4ffc00000, data 0x10d36fc/0x118d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 21856256 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28c64000 session 0x559a25580d20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27e3ab40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.195116997s of 10.299222946s, submitted: 33
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27e3da40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21839872 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113268 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 21831680 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.175165176s of 22.234811783s, submitted: 17
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a28ac92c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 21413888 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138546 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a273dd400 session 0x559a256f10e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x10c367a/0x117a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 21397504 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 20742144 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 20742144 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2569da40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2803be00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161579 data_alloc: 234881024 data_used: 10174464
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 21610496 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27bbd2c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116093 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a255801e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a25581e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 21577728 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27befc00 session 0x559a2742c960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27a0a1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.908199310s of 18.412792206s, submitted: 32
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2742d680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a283901e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143635 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27a0ab40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b4c00 session 0x559a280e3680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a274285a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 21872640 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa19c000/0x0/0x4ffc00000, data 0x10086dc/0x10c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2572e1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 22339584 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144444 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 22298624 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa19c000/0x0/0x4ffc00000, data 0x10086dc/0x10c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109404160 unmapped: 21741568 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27951680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27de9a40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21774336 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21774336 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b5400 session 0x559a27de7e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 21086208 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 21078016 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122072 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 21069824 heap: 131145728 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.809024811s of 37.447509766s, submitted: 86
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a264234a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f2f00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2845cb40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2803a3c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a281b5000 session 0x559a26539680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa3a9000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109936640 unmapped: 29605888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194133 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x16f867a/0x17af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2654c960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196070 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 29630464 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262190 data_alloc: 234881024 data_used: 16707584
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 24518656 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x16f869d/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262190 data_alloc: 234881024 data_used: 16707584
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 24485888 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.052062988s of 19.211286545s, submitted: 24
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19505152 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 16826368 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 16408576 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 16408576 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371334 data_alloc: 234881024 data_used: 18436096
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 16228352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 16203776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 16203776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371350 data_alloc: 234881024 data_used: 18436096
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 16171008 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 16138240 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372414 data_alloc: 234881024 data_used: 18464768
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 16130048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 16130048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8f13000/0x0/0x4ffc00000, data 0x228969d/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.596179008s of 15.956642151s, submitted: 86
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2803a1e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 25747456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a28ad32c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133902 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 26370048 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.090698242s of 22.205930710s, submitted: 37
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a28ad3860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a284703c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 26337280 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a2839cf00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2552f4a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a27decd20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181157 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e97000/0x0/0x4ffc00000, data 0x130e67a/0x13c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a28ab9860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181157 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cacc00 session 0x559a27e38960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a258bfe00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2552ef00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 26345472 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 26443776 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 25337856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222158 data_alloc: 234881024 data_used: 12767232
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222158 data_alloc: 234881024 data_used: 12767232
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x130e69d/0x13c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27431680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a260d9c20
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a260d9a40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a260d94a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 24829952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.390014648s of 17.499362946s, submitted: 25
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a274270e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a27982000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a279825a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cadc00 session 0x559a279823c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a279830e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9e94000/0x0/0x4ffc00000, data 0x130e6d6/0x13c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 24739840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x179170f/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 18194432 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 21217280 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351827 data_alloc: 234881024 data_used: 13385728
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 18694144 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 18694144 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353347 data_alloc: 234881024 data_used: 13520896
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 18685952 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28415 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:08 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a256f2000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x211c70f/0x21d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 19898368 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19472384 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380455 data_alloc: 234881024 data_used: 18247680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 16842752 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 16809984 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380455 data_alloc: 234881024 data_used: 18247680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122765312 unmapped: 16777216 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 16744448 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 16744448 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f905f000/0x0/0x4ffc00000, data 0x214370f/0x21fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.176345825s of 21.564163208s, submitted: 117
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 123240448 unmapped: 16302080 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122396672 unmapped: 17145856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409185 data_alloc: 234881024 data_used: 18366464
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122396672 unmapped: 17145856 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 17014784 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411779 data_alloc: 234881024 data_used: 18366464
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d66000/0x0/0x4ffc00000, data 0x243c70f/0x24f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 17006592 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411779 data_alloc: 234881024 data_used: 18366464
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 16998400 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.063998222s of 12.514191628s, submitted: 29
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d64000/0x0/0x4ffc00000, data 0x243d70f/0x24f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411635 data_alloc: 234881024 data_used: 18378752
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 16932864 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2569de00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a28cac000 session 0x559a2845c960
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f8d64000/0x0/0x4ffc00000, data 0x243d70f/0x24f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a27e3d680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f92b7000/0x0/0x4ffc00000, data 0x1c9d69d/0x1d55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 18513920 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a2654cf00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316547 data_alloc: 234881024 data_used: 13529088
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.850693703s of 10.007095337s, submitted: 51
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a2742b680
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b69d/0xe33000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 23699456 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 23691264 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154171 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 23683072 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.863340378s of 25.891838074s, submitted: 8
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a28175e00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a27950f00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a2552f4a0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc400 session 0x559a27de6780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fc800 session 0x559a2742c000
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177188 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 23871488 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26f000/0x0/0x4ffc00000, data 0xf3667a/0xfed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fdc00 session 0x559a260d7860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 23863296 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179125 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189765 data_alloc: 218103808 data_used: 8372224
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa26e000/0x0/0x4ffc00000, data 0xf3669d/0xfee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189765 data_alloc: 218103808 data_used: 8372224
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 23855104 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.808490753s of 19.874235153s, submitted: 26
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120709120 unmapped: 18833408 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 19079168 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 19079168 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9870000/0x0/0x4ffc00000, data 0x192c69d/0x19e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 19070976 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269811 data_alloc: 218103808 data_used: 8970240
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 18989056 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261691 data_alloc: 218103808 data_used: 8970240
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9859000/0x0/0x4ffc00000, data 0x194b69d/0x1a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 20643840 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.244734764s of 12.532417297s, submitted: 93
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a2610fc00 session 0x559a27de8b40
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a276ec000 session 0x559a27decf00
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 22421504 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 22413312 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 22413312 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 22396928 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 22388736 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 22380544 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 22372352 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 22405120 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
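The do_command entries record requests arriving on the OSD's admin socket; a config/counter/perf sweep like this one typically comes from a metrics collector polling the daemon. The same commands can be issued by hand through the ceph CLI; a small wrapper, assuming the CLI and this daemon's .asok socket are available locally (as they are on the node writing this log):

    import json
    import subprocess

    def osd_admin(cmd, osd="osd.1"):
        """Run an admin-socket command against a local OSD via 'ceph daemon'."""
        out = subprocess.check_output(["ceph", "daemon", osd] + cmd)
        return json.loads(out)

    # The same commands seen in the do_command entries above:
    cfg = osd_admin(["config", "show"])
    print(cfg.get("osd_memory_target"))      # the 4294967296 target in tune_memory
    schema = osd_admin(["counter", "schema"])
    perf = osd_admin(["perf", "dump"])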
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 22331392 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 22192128 heap: 139542528 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 22003712 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf dump' '{prefix=perf dump}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf schema' '{prefix=perf schema}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 33570816 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 33562624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 33554432 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 33554432 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 33554432 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 33546240 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 33546240 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 33546240 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 33546240 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 33546240 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 33538048 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 33529856 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 33521664 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 33513472 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.4 total, 600.0 interval
    Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 11K writes, 3290 syncs, 3.54 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2135 writes, 6899 keys, 2135 commit groups, 1.0 writes per commit group, ingest: 8.10 MB, 0.01 MB/s
    Interval WAL: 2135 writes, 914 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
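The derived figures in the stats dump are reproducible from its raw counters: writes-per-sync is writes divided by syncs, and the MB/s rates are ingest divided by the 600-second interval. A quick check with values copied from the dump above:

    # Figures copied from the Interval lines of the DB Stats dump.
    interval_secs = 600.0
    interval_writes = 2135
    interval_syncs = 914
    interval_ingest_mb = 8.10

    print(f"writes per sync: {interval_writes / interval_syncs:.2f}")     # 2.34, as logged
    print(f"ingest rate: {interval_ingest_mb / interval_secs:.2f} MB/s")  # 0.01 MB/s, as logged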
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 33505280 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 33497088 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161001 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 33488896 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 33480704 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 33472512 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 33464320 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33456128 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 33447936 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 33439744 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 33431552 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 33423360 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 33415168 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 33406976 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33398784 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33390592 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 317.181915283s of 317.285064697s, submitted: 32
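That utilization line is the one message here that is not part of the repeating tuner cycle: over a roughly 317 s window the BlueStore kv-sync thread sat idle almost the entire time while committing 32 transactions. Back-of-envelope, straight from the numbers above:

    # Idle fraction and commit spacing from the _kv_sync_thread line above.
    idle, window, submitted = 317.181915283, 317.285064697, 32
    print(f"idle {idle / window:.4%}")                  # ~99.97% idle
    print(f"~{window / submitted:.1f} s per commit")    # ~9.9 s between commits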
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
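The _resize_shards lines show how the ~2.65 GiB budget chosen by tune_memory (cache_size matches its "new mem" value) is carved into per-shard allocations: each *_alloc is a clean MiB multiple, the four together account for just under the full cache_size, and the *_used counters stay tiny on this idle OSD. The bookkeeping, with values copied from the line above:

    # Shard allocations from the _resize_shards line above, converted to MiB.
    MiB = 2**20
    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 218103808}
    for name, nbytes in alloc.items():
        print(f"{name:9s} {nbytes // MiB:5d} MiB")       # 1152 / 224 / 1088 / 208 MiB
    print(f"sum {sum(alloc.values()) / MiB:.0f} MiB of "
          f"{cache_size / MiB:.0f} MiB cache_size")      # 2672 of 2714 MiB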
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33382400 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,1])
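This heartbeat is the first whose counters actually move: the available-bytes field grew by exactly one 4 KiB block relative to the earlier 0x4fa429000 samples, and the op history is no longer empty ([0,0,0,1] rather than []). Reading that as one small deallocation plus a single recent op landing in the histogram buckets is an interpretation, but the delta itself is exact:

    # Exact difference between the old and new statfs "available" values.
    before, after = 0x4fa429000, 0x4fa42a000
    print(hex(after - before), after - before)   # 0x1000 4096 (one 4 KiB block)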
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 33226752 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117547008 unmapped: 33038336 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117555200 unmapped: 33030144 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 33021952 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117571584 unmapped: 33013760 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117579776 unmapped: 33005568 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 32997376 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 32989184 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117604352 unmapped: 32980992 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117604352 unmapped: 32980992 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32972800 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 32964608 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 32956416 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117637120 unmapped: 32948224 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 32940032 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 32931840 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 32923648 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 32915456 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 32907264 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 32899072 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 32890880 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 32882688 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:08.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:08.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 23 16:28:08 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:08.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a26529c00 session 0x559a27df10e0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a271d3c00 session 0x559a27de83c0
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 32874496 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a27444800 session 0x559a27de7860
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 ms_handle_reset con 0x559a256fd800 session 0x559a2803a780
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 32866304 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 32858112 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0xd7b67a/0xe32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 32849920 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 32849920 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 32849920 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 32538624 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160709 data_alloc: 218103808 data_used: 6934528
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32530432 heap: 150585344 old mem: 2845415832 new mem: 2845415832
Nov 23 16:28:08 np0005532761 ceph-osd[83114]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18405 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987271107' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27226 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18420 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:09 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:09 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:09.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/213428320' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923006443' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28451 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27241 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 16:28:09 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602941432' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28472 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27253 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28478 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28493 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:10 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:10 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:10 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:10.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18480 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:28:10 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 16:28:10 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2055775712' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 16:28:10 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27286 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18504 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28526 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27304 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 16:28:11 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685656778' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18528 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:11 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:11 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:11 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:11.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27316 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:11 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18558 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:11 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:28:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:28:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:28:12 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:12 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:28:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27334 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520962965' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519341427' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27346 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:12 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:12 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:12.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811161713' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1411: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:12 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27355 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 16:28:12 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737878222' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532761 nova_compute[257263]: 2025-11-23 21:28:13.044 257267 DEBUG oslo_service.periodic_task [None req-2567dae2-70cf-43fa-b258-1f9d9499d2b3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843252747' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074116635' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27367 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737923499' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 16:28:13 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:13 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:13 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:13.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 16:28:13 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065148221' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608735071' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321771611' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 16:28:14 np0005532761 systemd[1]: Starting Hostname Service...
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2582421277' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 16:28:14 np0005532761 podman[295503]: 2025-11-23 21:28:14.295155156 +0000 UTC m=+0.082475468 container health_status e076919072002cb4fe9a3eb617d09500506359f33bb9eabb30253dd41e4d14a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 16:28:14 np0005532761 podman[295501]: 2025-11-23 21:28:14.302487114 +0000 UTC m=+0.089792105 container health_status 93ec961d707efba96c2b48bf4be571a7686031e11fcba32bcf7ee940d836047b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 16:28:14 np0005532761 systemd[1]: Started Hostname Service.
Nov 23 16:28:14 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28721 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:14 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:14 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:14 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:14.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:14 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18717 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:14 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1412: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 16:28:14 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28739 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 16:28:14 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027232004' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18744 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28751 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28769 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:15 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:15 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:15.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18777 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 16:28:15 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104105981' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28817 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 16:28:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580969354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18831 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:16 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:16 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28835 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27490 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1413: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:16 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:28:16 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913007999' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18852 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:16 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-2-0-compute-0-bfglcy[269924]: 23/11/2025 21:28:17 : epoch 692378c6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28865 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568664151' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18876 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27508 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27517 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28892 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:17 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:17 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:17.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:17.551Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:28:17 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-0-oyehye[74865]: ::ffff:192.168.122.100 - - [23/Nov/2025:21:28:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: [prometheus INFO cherrypy.access.139677029282720] ::ffff:192.168.122.100 - - [23/Nov/2025:21:28:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18915 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27541 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898535550' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.28955 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27565 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.18954 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 16:28:18 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:18 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:18 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4173577089' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-mgr[74869]: log_channel(cluster) log [DBG] : pgmap v1414: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 23 16:28:18 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967447396' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 16:28:18 np0005532761 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-alertmanager-compute-0[104744]: ts=2025-11-23T21:28:18.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 23 16:28:19 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27592 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141405296' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 16:28:19 np0005532761 ceph-mgr[74869]: log_channel(audit) log [DBG] : from='client.27604 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 16:28:19 np0005532761 radosgw[95430]: ====== starting new request req=0x7f1ba47f95d0 =====
Nov 23 16:28:19 np0005532761 radosgw[95430]: ====== req done req=0x7f1ba47f95d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:19 np0005532761 radosgw[95430]: beast: 0x7f1ba47f95d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668592079' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:28:19 np0005532761 ceph-mon[74569]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished